00:00:00.001 Started by upstream project "autotest-per-patch" build number 120580 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.126 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.171 Using shallow fetch with depth 1 00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.171 > git --version # timeout=10 00:00:00.200 > git --version # 'git version 2.39.2' 00:00:00.200 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.156 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.169 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.183 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD) 00:00:07.183 > git config core.sparsecheckout # timeout=10 00:00:07.195 > git read-tree -mu HEAD # timeout=10 00:00:07.213 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5 00:00:07.232 Commit message: "packer: Insert post-processors only if at least one is defined" 00:00:07.232 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10 00:00:07.330 [Pipeline] Start of Pipeline 00:00:07.345 [Pipeline] library 00:00:07.347 Loading library shm_lib@master 00:00:07.347 Library shm_lib@master is cached. Copying from home. 00:00:07.365 [Pipeline] node 00:00:22.367 Still waiting to schedule task 00:00:22.368 Waiting for next available executor on ‘vagrant-vm-host’ 00:10:38.771 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:38.773 [Pipeline] { 00:10:38.785 [Pipeline] catchError 00:10:38.786 [Pipeline] { 00:10:38.804 [Pipeline] wrap 00:10:38.814 [Pipeline] { 00:10:38.823 [Pipeline] stage 00:10:38.825 [Pipeline] { (Prologue) 00:10:38.847 [Pipeline] echo 00:10:38.849 Node: VM-host-WFP1 00:10:38.855 [Pipeline] cleanWs 00:10:38.863 [WS-CLEANUP] Deleting project workspace... 00:10:38.863 [WS-CLEANUP] Deferred wipeout is used... 
00:10:38.869 [WS-CLEANUP] done 00:10:39.032 [Pipeline] setCustomBuildProperty 00:10:39.090 [Pipeline] nodesByLabel 00:10:39.091 Found a total of 1 nodes with the 'sorcerer' label 00:10:39.105 [Pipeline] httpRequest 00:10:39.110 HttpMethod: GET 00:10:39.110 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:10:39.113 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:10:39.115 Response Code: HTTP/1.1 200 OK 00:10:39.116 Success: Status code 200 is in the accepted range: 200,404 00:10:39.116 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:10:39.254 [Pipeline] sh 00:10:39.533 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:10:39.551 [Pipeline] httpRequest 00:10:39.555 HttpMethod: GET 00:10:39.555 URL: http://10.211.164.101/packages/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz 00:10:39.556 Sending request to url: http://10.211.164.101/packages/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz 00:10:39.557 Response Code: HTTP/1.1 200 OK 00:10:39.558 Success: Status code 200 is in the accepted range: 200,404 00:10:39.558 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz 00:10:41.725 [Pipeline] sh 00:10:42.009 + tar --no-same-owner -xf spdk_ce34c7fd8070d809da49114e6354281a60a27df5.tar.gz 00:10:44.572 [Pipeline] sh 00:10:44.853 + git -C spdk log --oneline -n5 00:10:44.853 ce34c7fd8 raid: move blocklen_shift to r5f_info 00:10:44.853 d61131ae0 raid5f: interleaved md support 00:10:44.853 02f0918d1 ut/raid: allow testing interleaved md 00:10:44.853 e69c6fb44 ut/raid: refactor setting test params 00:10:44.853 5e7f316cf raid: superblock interleaved md support 00:10:44.873 [Pipeline] writeFile 00:10:44.890 [Pipeline] sh 00:10:45.171 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:10:45.183 [Pipeline] sh 00:10:45.464 + cat autorun-spdk.conf 00:10:45.464 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:45.464 SPDK_TEST_NVMF=1 00:10:45.464 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:45.464 SPDK_TEST_USDT=1 00:10:45.464 SPDK_TEST_NVMF_MDNS=1 00:10:45.464 SPDK_RUN_UBSAN=1 00:10:45.464 NET_TYPE=virt 00:10:45.464 SPDK_JSONRPC_GO_CLIENT=1 00:10:45.464 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:45.471 RUN_NIGHTLY=0 00:10:45.473 [Pipeline] } 00:10:45.491 [Pipeline] // stage 00:10:45.507 [Pipeline] stage 00:10:45.510 [Pipeline] { (Run VM) 00:10:45.525 [Pipeline] sh 00:10:45.821 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:10:45.821 + echo 'Start stage prepare_nvme.sh' 00:10:45.821 Start stage prepare_nvme.sh 00:10:45.821 + [[ -n 7 ]] 00:10:45.821 + disk_prefix=ex7 00:10:45.821 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:10:45.821 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:10:45.821 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:10:45.821 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:45.821 ++ SPDK_TEST_NVMF=1 00:10:45.821 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:45.821 ++ SPDK_TEST_USDT=1 00:10:45.821 ++ SPDK_TEST_NVMF_MDNS=1 00:10:45.821 ++ SPDK_RUN_UBSAN=1 00:10:45.821 ++ NET_TYPE=virt 00:10:45.821 ++ SPDK_JSONRPC_GO_CLIENT=1 00:10:45.821 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:45.821 ++ RUN_NIGHTLY=0 00:10:45.821 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:45.821 + nvme_files=() 00:10:45.821 + declare -A nvme_files 00:10:45.821 + 
backend_dir=/var/lib/libvirt/images/backends 00:10:45.821 + nvme_files['nvme.img']=5G 00:10:45.821 + nvme_files['nvme-cmb.img']=5G 00:10:45.821 + nvme_files['nvme-multi0.img']=4G 00:10:45.821 + nvme_files['nvme-multi1.img']=4G 00:10:45.821 + nvme_files['nvme-multi2.img']=4G 00:10:45.821 + nvme_files['nvme-openstack.img']=8G 00:10:45.821 + nvme_files['nvme-zns.img']=5G 00:10:45.821 + (( SPDK_TEST_NVME_PMR == 1 )) 00:10:45.821 + (( SPDK_TEST_FTL == 1 )) 00:10:45.821 + (( SPDK_TEST_NVME_FDP == 1 )) 00:10:45.821 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:10:45.821 + for nvme in "${!nvme_files[@]}" 00:10:45.821 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:10:45.821 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:10:46.079 + for nvme in "${!nvme_files[@]}" 00:10:46.079 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:10:46.079 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:10:46.337 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:10:46.337 + echo 'End stage prepare_nvme.sh' 00:10:46.337 End stage prepare_nvme.sh 00:10:46.349 [Pipeline] sh 00:10:46.629 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:10:46.629 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:10:46.629 00:10:46.630 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:10:46.630 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:10:46.630 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 
00:10:46.630 HELP=0 00:10:46.630 DRY_RUN=0 00:10:46.630 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:10:46.630 NVME_DISKS_TYPE=nvme,nvme, 00:10:46.630 NVME_AUTO_CREATE=0 00:10:46.630 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:10:46.630 NVME_CMB=,, 00:10:46.630 NVME_PMR=,, 00:10:46.630 NVME_ZNS=,, 00:10:46.630 NVME_MS=,, 00:10:46.630 NVME_FDP=,, 00:10:46.630 SPDK_VAGRANT_DISTRO=fedora38 00:10:46.630 SPDK_VAGRANT_VMCPU=10 00:10:46.630 SPDK_VAGRANT_VMRAM=12288 00:10:46.630 SPDK_VAGRANT_PROVIDER=libvirt 00:10:46.630 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:10:46.630 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:10:46.630 SPDK_OPENSTACK_NETWORK=0 00:10:46.630 VAGRANT_PACKAGE_BOX=0 00:10:46.630 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:10:46.630 FORCE_DISTRO=true 00:10:46.630 VAGRANT_BOX_VERSION= 00:10:46.630 EXTRA_VAGRANTFILES= 00:10:46.630 NIC_MODEL=e1000 00:10:46.630 00:10:46.630 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:10:46.630 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:49.163 Bringing machine 'default' up with 'libvirt' provider... 00:10:50.539 ==> default: Creating image (snapshot of base box volume). 00:10:50.798 ==> default: Creating domain with the following settings... 00:10:50.798 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713452285_54002788709172203866 00:10:50.798 ==> default: -- Domain type: kvm 00:10:50.798 ==> default: -- Cpus: 10 00:10:50.798 ==> default: -- Feature: acpi 00:10:50.798 ==> default: -- Feature: apic 00:10:50.798 ==> default: -- Feature: pae 00:10:50.798 ==> default: -- Memory: 12288M 00:10:50.798 ==> default: -- Memory Backing: hugepages: 00:10:50.798 ==> default: -- Management MAC: 00:10:50.798 ==> default: -- Loader: 00:10:50.798 ==> default: -- Nvram: 00:10:50.798 ==> default: -- Base box: spdk/fedora38 00:10:50.798 ==> default: -- Storage pool: default 00:10:50.798 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713452285_54002788709172203866.img (20G) 00:10:50.798 ==> default: -- Volume Cache: default 00:10:50.798 ==> default: -- Kernel: 00:10:50.798 ==> default: -- Initrd: 00:10:50.798 ==> default: -- Graphics Type: vnc 00:10:50.798 ==> default: -- Graphics Port: -1 00:10:50.798 ==> default: -- Graphics IP: 127.0.0.1 00:10:50.798 ==> default: -- Graphics Password: Not defined 00:10:50.798 ==> default: -- Video Type: cirrus 00:10:50.798 ==> default: -- Video VRAM: 9216 00:10:50.798 ==> default: -- Sound Type: 00:10:50.798 ==> default: -- Keymap: en-us 00:10:50.798 ==> default: -- TPM Path: 00:10:50.798 ==> default: -- INPUT: type=mouse, bus=ps2 00:10:50.798 ==> default: -- Command line args: 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:10:50.798 ==> default: -> value=-drive, 00:10:50.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:10:50.798 ==> default: -> value=-drive, 00:10:50.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:50.798 ==> default: -> value=-drive, 00:10:50.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:50.798 ==> default: -> value=-drive, 00:10:50.798 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:10:50.798 ==> default: -> value=-device, 00:10:50.798 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:51.366 ==> default: Creating shared folders metadata... 00:10:51.366 ==> default: Starting domain. 00:10:54.735 ==> default: Waiting for domain to get an IP address... 00:11:12.836 ==> default: Waiting for SSH to become available... 00:11:13.780 ==> default: Configuring and enabling network interfaces... 00:11:19.098 default: SSH address: 192.168.121.243:22 00:11:19.098 default: SSH username: vagrant 00:11:19.098 default: SSH auth method: private key 00:11:22.392 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:11:30.585 ==> default: Mounting SSHFS shared folder... 00:11:33.120 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:11:33.120 ==> default: Checking Mount.. 00:11:34.497 ==> default: Folder Successfully Mounted! 00:11:34.497 ==> default: Running provisioner: file... 00:11:35.881 default: ~/.gitconfig => .gitconfig 00:11:36.149 00:11:36.149 SUCCESS! 00:11:36.149 00:11:36.149 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:11:36.149 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:11:36.149 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:11:36.149 00:11:36.158 [Pipeline] } 00:11:36.178 [Pipeline] // stage 00:11:36.190 [Pipeline] dir 00:11:36.190 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:11:36.192 [Pipeline] { 00:11:36.206 [Pipeline] catchError 00:11:36.208 [Pipeline] { 00:11:36.222 [Pipeline] sh 00:11:36.504 + vagrant ssh-config --host vagrant 00:11:36.504 + sed -ne /^Host/,$p 00:11:36.504 + tee ssh_conf 00:11:39.790 Host vagrant 00:11:39.790 HostName 192.168.121.243 00:11:39.790 User vagrant 00:11:39.790 Port 22 00:11:39.790 UserKnownHostsFile /dev/null 00:11:39.790 StrictHostKeyChecking no 00:11:39.790 PasswordAuthentication no 00:11:39.790 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:11:39.790 IdentitiesOnly yes 00:11:39.790 LogLevel FATAL 00:11:39.790 ForwardAgent yes 00:11:39.790 ForwardX11 yes 00:11:39.790 00:11:39.803 [Pipeline] withEnv 00:11:39.805 [Pipeline] { 00:11:39.819 [Pipeline] sh 00:11:40.096 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:11:40.096 source /etc/os-release 00:11:40.096 [[ -e /image.version ]] && img=$(< /image.version) 00:11:40.096 # Minimal, systemd-like check. 00:11:40.096 if [[ -e /.dockerenv ]]; then 00:11:40.096 # Clear garbage from the node's name: 00:11:40.096 # agt-er_autotest_547-896 -> autotest_547-896 00:11:40.096 # $HOSTNAME is the actual container id 00:11:40.096 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:11:40.096 if mountpoint -q /etc/hostname; then 00:11:40.096 # We can assume this is a mount from a host where container is running, 00:11:40.096 # so fetch its hostname to easily identify the target swarm worker. 00:11:40.096 container="$(< /etc/hostname) ($agent)" 00:11:40.096 else 00:11:40.096 # Fallback 00:11:40.096 container=$agent 00:11:40.096 fi 00:11:40.096 fi 00:11:40.096 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:11:40.096 00:11:40.366 [Pipeline] } 00:11:40.386 [Pipeline] // withEnv 00:11:40.394 [Pipeline] setCustomBuildProperty 00:11:40.409 [Pipeline] stage 00:11:40.411 [Pipeline] { (Tests) 00:11:40.432 [Pipeline] sh 00:11:40.709 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:11:40.985 [Pipeline] timeout 00:11:40.986 Timeout set to expire in 40 min 00:11:40.988 [Pipeline] { 00:11:41.006 [Pipeline] sh 00:11:41.286 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:11:41.852 HEAD is now at ce34c7fd8 raid: move blocklen_shift to r5f_info 00:11:41.865 [Pipeline] sh 00:11:42.144 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:11:42.414 [Pipeline] sh 00:11:42.693 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:11:42.966 [Pipeline] sh 00:11:43.312 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:11:43.572 ++ readlink -f spdk_repo 00:11:43.572 + DIR_ROOT=/home/vagrant/spdk_repo 00:11:43.572 + [[ -n /home/vagrant/spdk_repo ]] 00:11:43.572 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:11:43.572 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:11:43.572 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:11:43.572 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:11:43.572 + [[ -d /home/vagrant/spdk_repo/output ]] 00:11:43.572 + cd /home/vagrant/spdk_repo 00:11:43.572 + source /etc/os-release 00:11:43.572 ++ NAME='Fedora Linux' 00:11:43.572 ++ VERSION='38 (Cloud Edition)' 00:11:43.572 ++ ID=fedora 00:11:43.572 ++ VERSION_ID=38 00:11:43.572 ++ VERSION_CODENAME= 00:11:43.572 ++ PLATFORM_ID=platform:f38 00:11:43.572 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:11:43.572 ++ ANSI_COLOR='0;38;2;60;110;180' 00:11:43.572 ++ LOGO=fedora-logo-icon 00:11:43.572 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:11:43.572 ++ HOME_URL=https://fedoraproject.org/ 00:11:43.572 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:11:43.572 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:11:43.572 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:11:43.572 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:11:43.572 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:11:43.572 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:11:43.572 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:11:43.572 ++ SUPPORT_END=2024-05-14 00:11:43.572 ++ VARIANT='Cloud Edition' 00:11:43.572 ++ VARIANT_ID=cloud 00:11:43.572 + uname -a 00:11:43.572 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:11:43.572 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:44.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:44.141 Hugepages 00:11:44.141 node hugesize free / total 00:11:44.141 node0 1048576kB 0 / 0 00:11:44.141 node0 2048kB 0 / 0 00:11:44.141 00:11:44.141 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:44.141 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:44.141 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:44.141 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:11:44.141 + rm -f /tmp/spdk-ld-path 00:11:44.141 + source autorun-spdk.conf 00:11:44.141 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:44.141 ++ SPDK_TEST_NVMF=1 00:11:44.141 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:44.141 ++ SPDK_TEST_USDT=1 00:11:44.141 ++ SPDK_TEST_NVMF_MDNS=1 00:11:44.141 ++ SPDK_RUN_UBSAN=1 00:11:44.141 ++ NET_TYPE=virt 00:11:44.141 ++ SPDK_JSONRPC_GO_CLIENT=1 00:11:44.141 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:44.141 ++ RUN_NIGHTLY=0 00:11:44.141 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:11:44.141 + [[ -n '' ]] 00:11:44.141 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:11:44.141 + for M in /var/spdk/build-*-manifest.txt 00:11:44.141 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:11:44.141 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:44.141 + for M in /var/spdk/build-*-manifest.txt 00:11:44.141 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:11:44.141 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:44.141 ++ uname 00:11:44.141 + [[ Linux == \L\i\n\u\x ]] 00:11:44.141 + sudo dmesg -T 00:11:44.400 + sudo dmesg --clear 00:11:44.400 + dmesg_pid=5104 00:11:44.400 + sudo dmesg -Tw 00:11:44.400 + [[ Fedora Linux == FreeBSD ]] 00:11:44.400 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.400 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:44.400 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:44.400 + [[ -x /usr/src/fio-static/fio ]] 00:11:44.400 + export FIO_BIN=/usr/src/fio-static/fio 00:11:44.400 + 
FIO_BIN=/usr/src/fio-static/fio 00:11:44.400 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:11:44.400 + [[ ! -v VFIO_QEMU_BIN ]] 00:11:44.400 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:11:44.400 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.400 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:44.400 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:11:44.400 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.400 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:44.400 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:44.400 Test configuration: 00:11:44.400 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:44.400 SPDK_TEST_NVMF=1 00:11:44.400 SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:44.400 SPDK_TEST_USDT=1 00:11:44.400 SPDK_TEST_NVMF_MDNS=1 00:11:44.400 SPDK_RUN_UBSAN=1 00:11:44.400 NET_TYPE=virt 00:11:44.400 SPDK_JSONRPC_GO_CLIENT=1 00:11:44.400 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:44.400 RUN_NIGHTLY=0 14:58:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.400 14:58:59 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:44.400 14:58:59 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.400 14:58:59 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.400 14:58:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.400 14:58:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.400 14:58:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.400 14:58:59 -- paths/export.sh@5 -- $ export PATH 00:11:44.400 14:58:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.400 14:58:59 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:11:44.400 14:58:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:11:44.400 14:58:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713452339.XXXXXX 00:11:44.400 14:58:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713452339.yq53xg 00:11:44.400 14:58:59 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:11:44.400 14:58:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:11:44.400 14:58:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:11:44.400 14:58:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:44.400 14:58:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:44.400 14:58:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:11:44.400 14:59:00 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:11:44.400 14:59:00 -- common/autotest_common.sh@10 -- $ set +x 00:11:44.400 14:59:00 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:11:44.400 14:59:00 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:11:44.400 14:59:00 -- pm/common@17 -- $ local monitor 00:11:44.400 14:59:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:44.400 14:59:00 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5139 00:11:44.400 14:59:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:44.400 14:59:00 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5141 00:11:44.400 14:59:00 -- pm/common@26 -- $ sleep 1 00:11:44.400 14:59:00 -- pm/common@21 -- $ date +%s 00:11:44.400 14:59:00 -- pm/common@21 -- $ date +%s 00:11:44.400 14:59:00 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713452340 00:11:44.400 14:59:00 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713452340 00:11:44.658 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713452340_collect-cpu-load.pm.log 00:11:44.658 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713452340_collect-vmstat.pm.log 00:11:45.623 14:59:01 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:11:45.623 14:59:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:11:45.623 14:59:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:11:45.623 14:59:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:11:45.623 14:59:01 -- spdk/autobuild.sh@16 -- $ date -u 00:11:45.623 Thu Apr 18 02:59:01 PM UTC 2024 00:11:45.623 14:59:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:11:45.623 v24.05-pre-417-gce34c7fd8 00:11:45.623 14:59:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:11:45.623 14:59:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:11:45.623 14:59:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:11:45.623 14:59:01 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:11:45.623 14:59:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:11:45.623 14:59:01 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.623 ************************************ 00:11:45.623 START TEST ubsan 00:11:45.623 ************************************ 00:11:45.623 using 
ubsan 00:11:45.623 14:59:01 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:11:45.623 00:11:45.623 real 0m0.000s 00:11:45.623 user 0m0.000s 00:11:45.623 sys 0m0.000s 00:11:45.623 14:59:01 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:11:45.623 14:59:01 -- common/autotest_common.sh@10 -- $ set +x 00:11:45.623 ************************************ 00:11:45.623 END TEST ubsan 00:11:45.623 ************************************ 00:11:45.623 14:59:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:11:45.623 14:59:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:11:45.623 14:59:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:11:45.623 14:59:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:11:45.623 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:45.623 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:11:46.222 Using 'verbs' RDMA provider 00:12:01.750 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:12:16.644 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:12:16.644 go version go1.21.1 linux/amd64 00:12:16.644 Creating mk/config.mk...done. 00:12:16.644 Creating mk/cc.flags.mk...done. 00:12:16.644 Type 'make' to build. 00:12:16.644 14:59:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:12:16.644 14:59:30 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:12:16.644 14:59:30 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:16.644 14:59:30 -- common/autotest_common.sh@10 -- $ set +x 00:12:16.644 ************************************ 00:12:16.644 START TEST make 00:12:16.644 ************************************ 00:12:16.644 14:59:30 -- common/autotest_common.sh@1111 -- $ make -j10 00:12:16.644 make[1]: Nothing to be done for 'all'. 
00:12:28.844 The Meson build system 00:12:28.844 Version: 1.3.1 00:12:28.844 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:12:28.844 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:28.844 Build type: native build 00:12:28.844 Program cat found: YES (/usr/bin/cat) 00:12:28.844 Project name: DPDK 00:12:28.844 Project version: 23.11.0 00:12:28.844 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:12:28.844 C linker for the host machine: cc ld.bfd 2.39-16 00:12:28.844 Host machine cpu family: x86_64 00:12:28.844 Host machine cpu: x86_64 00:12:28.844 Message: ## Building in Developer Mode ## 00:12:28.844 Program pkg-config found: YES (/usr/bin/pkg-config) 00:12:28.844 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:28.844 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:28.844 Program python3 found: YES (/usr/bin/python3) 00:12:28.844 Program cat found: YES (/usr/bin/cat) 00:12:28.844 Compiler for C supports arguments -march=native: YES 00:12:28.844 Checking for size of "void *" : 8 00:12:28.844 Checking for size of "void *" : 8 (cached) 00:12:28.844 Library m found: YES 00:12:28.844 Library numa found: YES 00:12:28.844 Has header "numaif.h" : YES 00:12:28.844 Library fdt found: NO 00:12:28.844 Library execinfo found: NO 00:12:28.844 Has header "execinfo.h" : YES 00:12:28.844 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:12:28.844 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:28.844 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:28.844 Run-time dependency jansson found: NO (tried pkgconfig) 00:12:28.844 Run-time dependency openssl found: YES 3.0.9 00:12:28.844 Run-time dependency libpcap found: YES 1.10.4 00:12:28.844 Has header "pcap.h" with dependency libpcap: YES 00:12:28.844 Compiler for C supports arguments -Wcast-qual: YES 00:12:28.844 Compiler for C supports arguments -Wdeprecated: YES 00:12:28.844 Compiler for C supports arguments -Wformat: YES 00:12:28.844 Compiler for C supports arguments -Wformat-nonliteral: NO 00:12:28.844 Compiler for C supports arguments -Wformat-security: NO 00:12:28.844 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:28.844 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:28.844 Compiler for C supports arguments -Wnested-externs: YES 00:12:28.844 Compiler for C supports arguments -Wold-style-definition: YES 00:12:28.844 Compiler for C supports arguments -Wpointer-arith: YES 00:12:28.844 Compiler for C supports arguments -Wsign-compare: YES 00:12:28.844 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:28.844 Compiler for C supports arguments -Wundef: YES 00:12:28.844 Compiler for C supports arguments -Wwrite-strings: YES 00:12:28.844 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:28.844 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:12:28.844 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:28.844 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:12:28.844 Program objdump found: YES (/usr/bin/objdump) 00:12:28.844 Compiler for C supports arguments -mavx512f: YES 00:12:28.844 Checking if "AVX512 checking" compiles: YES 00:12:28.844 Fetching value of define "__SSE4_2__" : 1 00:12:28.844 Fetching value of define "__AES__" : 1 00:12:28.844 Fetching value of define "__AVX__" : 1 00:12:28.844 
Fetching value of define "__AVX2__" : 1 00:12:28.844 Fetching value of define "__AVX512BW__" : 1 00:12:28.844 Fetching value of define "__AVX512CD__" : 1 00:12:28.844 Fetching value of define "__AVX512DQ__" : 1 00:12:28.844 Fetching value of define "__AVX512F__" : 1 00:12:28.844 Fetching value of define "__AVX512VL__" : 1 00:12:28.844 Fetching value of define "__PCLMUL__" : 1 00:12:28.844 Fetching value of define "__RDRND__" : 1 00:12:28.844 Fetching value of define "__RDSEED__" : 1 00:12:28.844 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:28.844 Fetching value of define "__znver1__" : (undefined) 00:12:28.844 Fetching value of define "__znver2__" : (undefined) 00:12:28.844 Fetching value of define "__znver3__" : (undefined) 00:12:28.844 Fetching value of define "__znver4__" : (undefined) 00:12:28.844 Compiler for C supports arguments -Wno-format-truncation: YES 00:12:28.844 Message: lib/log: Defining dependency "log" 00:12:28.844 Message: lib/kvargs: Defining dependency "kvargs" 00:12:28.844 Message: lib/telemetry: Defining dependency "telemetry" 00:12:28.844 Checking for function "getentropy" : NO 00:12:28.844 Message: lib/eal: Defining dependency "eal" 00:12:28.844 Message: lib/ring: Defining dependency "ring" 00:12:28.844 Message: lib/rcu: Defining dependency "rcu" 00:12:28.844 Message: lib/mempool: Defining dependency "mempool" 00:12:28.844 Message: lib/mbuf: Defining dependency "mbuf" 00:12:28.844 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:28.844 Fetching value of define "__AVX512F__" : 1 (cached) 00:12:28.844 Fetching value of define "__AVX512BW__" : 1 (cached) 00:12:28.844 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:12:28.844 Fetching value of define "__AVX512VL__" : 1 (cached) 00:12:28.844 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:12:28.844 Compiler for C supports arguments -mpclmul: YES 00:12:28.844 Compiler for C supports arguments -maes: YES 00:12:28.844 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:28.844 Compiler for C supports arguments -mavx512bw: YES 00:12:28.844 Compiler for C supports arguments -mavx512dq: YES 00:12:28.844 Compiler for C supports arguments -mavx512vl: YES 00:12:28.844 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:28.844 Compiler for C supports arguments -mavx2: YES 00:12:28.844 Compiler for C supports arguments -mavx: YES 00:12:28.844 Message: lib/net: Defining dependency "net" 00:12:28.844 Message: lib/meter: Defining dependency "meter" 00:12:28.844 Message: lib/ethdev: Defining dependency "ethdev" 00:12:28.844 Message: lib/pci: Defining dependency "pci" 00:12:28.844 Message: lib/cmdline: Defining dependency "cmdline" 00:12:28.844 Message: lib/hash: Defining dependency "hash" 00:12:28.844 Message: lib/timer: Defining dependency "timer" 00:12:28.844 Message: lib/compressdev: Defining dependency "compressdev" 00:12:28.844 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:28.844 Message: lib/dmadev: Defining dependency "dmadev" 00:12:28.844 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:28.844 Message: lib/power: Defining dependency "power" 00:12:28.844 Message: lib/reorder: Defining dependency "reorder" 00:12:28.844 Message: lib/security: Defining dependency "security" 00:12:28.844 Has header "linux/userfaultfd.h" : YES 00:12:28.845 Has header "linux/vduse.h" : YES 00:12:28.845 Message: lib/vhost: Defining dependency "vhost" 00:12:28.845 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:12:28.845 Message: 
drivers/bus/pci: Defining dependency "bus_pci" 00:12:28.845 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:28.845 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:28.845 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:28.845 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:28.845 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:28.845 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:28.845 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:28.845 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:28.845 Program doxygen found: YES (/usr/bin/doxygen) 00:12:28.845 Configuring doxy-api-html.conf using configuration 00:12:28.845 Configuring doxy-api-man.conf using configuration 00:12:28.845 Program mandb found: YES (/usr/bin/mandb) 00:12:28.845 Program sphinx-build found: NO 00:12:28.845 Configuring rte_build_config.h using configuration 00:12:28.845 Message: 00:12:28.845 ================= 00:12:28.845 Applications Enabled 00:12:28.845 ================= 00:12:28.845 00:12:28.845 apps: 00:12:28.845 00:12:28.845 00:12:28.845 Message: 00:12:28.845 ================= 00:12:28.845 Libraries Enabled 00:12:28.845 ================= 00:12:28.845 00:12:28.845 libs: 00:12:28.845 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:28.845 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:28.845 cryptodev, dmadev, power, reorder, security, vhost, 00:12:28.845 00:12:28.845 Message: 00:12:28.845 =============== 00:12:28.845 Drivers Enabled 00:12:28.845 =============== 00:12:28.845 00:12:28.845 common: 00:12:28.845 00:12:28.845 bus: 00:12:28.845 pci, vdev, 00:12:28.845 mempool: 00:12:28.845 ring, 00:12:28.845 dma: 00:12:28.845 00:12:28.845 net: 00:12:28.845 00:12:28.845 crypto: 00:12:28.845 00:12:28.845 compress: 00:12:28.845 00:12:28.845 vdpa: 00:12:28.845 00:12:28.845 00:12:28.845 Message: 00:12:28.845 ================= 00:12:28.845 Content Skipped 00:12:28.845 ================= 00:12:28.845 00:12:28.845 apps: 00:12:28.845 dumpcap: explicitly disabled via build config 00:12:28.845 graph: explicitly disabled via build config 00:12:28.845 pdump: explicitly disabled via build config 00:12:28.845 proc-info: explicitly disabled via build config 00:12:28.845 test-acl: explicitly disabled via build config 00:12:28.845 test-bbdev: explicitly disabled via build config 00:12:28.845 test-cmdline: explicitly disabled via build config 00:12:28.845 test-compress-perf: explicitly disabled via build config 00:12:28.845 test-crypto-perf: explicitly disabled via build config 00:12:28.845 test-dma-perf: explicitly disabled via build config 00:12:28.845 test-eventdev: explicitly disabled via build config 00:12:28.845 test-fib: explicitly disabled via build config 00:12:28.845 test-flow-perf: explicitly disabled via build config 00:12:28.845 test-gpudev: explicitly disabled via build config 00:12:28.845 test-mldev: explicitly disabled via build config 00:12:28.845 test-pipeline: explicitly disabled via build config 00:12:28.845 test-pmd: explicitly disabled via build config 00:12:28.845 test-regex: explicitly disabled via build config 00:12:28.845 test-sad: explicitly disabled via build config 00:12:28.845 test-security-perf: explicitly disabled via build config 00:12:28.845 00:12:28.845 libs: 00:12:28.845 metrics: explicitly disabled via build config 00:12:28.845 acl: explicitly disabled via 
build config 00:12:28.845 bbdev: explicitly disabled via build config 00:12:28.845 bitratestats: explicitly disabled via build config 00:12:28.845 bpf: explicitly disabled via build config 00:12:28.845 cfgfile: explicitly disabled via build config 00:12:28.845 distributor: explicitly disabled via build config 00:12:28.845 efd: explicitly disabled via build config 00:12:28.845 eventdev: explicitly disabled via build config 00:12:28.845 dispatcher: explicitly disabled via build config 00:12:28.845 gpudev: explicitly disabled via build config 00:12:28.845 gro: explicitly disabled via build config 00:12:28.845 gso: explicitly disabled via build config 00:12:28.845 ip_frag: explicitly disabled via build config 00:12:28.845 jobstats: explicitly disabled via build config 00:12:28.845 latencystats: explicitly disabled via build config 00:12:28.845 lpm: explicitly disabled via build config 00:12:28.845 member: explicitly disabled via build config 00:12:28.845 pcapng: explicitly disabled via build config 00:12:28.845 rawdev: explicitly disabled via build config 00:12:28.845 regexdev: explicitly disabled via build config 00:12:28.845 mldev: explicitly disabled via build config 00:12:28.845 rib: explicitly disabled via build config 00:12:28.845 sched: explicitly disabled via build config 00:12:28.845 stack: explicitly disabled via build config 00:12:28.845 ipsec: explicitly disabled via build config 00:12:28.845 pdcp: explicitly disabled via build config 00:12:28.845 fib: explicitly disabled via build config 00:12:28.845 port: explicitly disabled via build config 00:12:28.845 pdump: explicitly disabled via build config 00:12:28.845 table: explicitly disabled via build config 00:12:28.845 pipeline: explicitly disabled via build config 00:12:28.845 graph: explicitly disabled via build config 00:12:28.845 node: explicitly disabled via build config 00:12:28.845 00:12:28.845 drivers: 00:12:28.845 common/cpt: not in enabled drivers build config 00:12:28.845 common/dpaax: not in enabled drivers build config 00:12:28.845 common/iavf: not in enabled drivers build config 00:12:28.845 common/idpf: not in enabled drivers build config 00:12:28.845 common/mvep: not in enabled drivers build config 00:12:28.845 common/octeontx: not in enabled drivers build config 00:12:28.845 bus/auxiliary: not in enabled drivers build config 00:12:28.845 bus/cdx: not in enabled drivers build config 00:12:28.845 bus/dpaa: not in enabled drivers build config 00:12:28.845 bus/fslmc: not in enabled drivers build config 00:12:28.845 bus/ifpga: not in enabled drivers build config 00:12:28.845 bus/platform: not in enabled drivers build config 00:12:28.845 bus/vmbus: not in enabled drivers build config 00:12:28.845 common/cnxk: not in enabled drivers build config 00:12:28.845 common/mlx5: not in enabled drivers build config 00:12:28.845 common/nfp: not in enabled drivers build config 00:12:28.845 common/qat: not in enabled drivers build config 00:12:28.845 common/sfc_efx: not in enabled drivers build config 00:12:28.845 mempool/bucket: not in enabled drivers build config 00:12:28.845 mempool/cnxk: not in enabled drivers build config 00:12:28.845 mempool/dpaa: not in enabled drivers build config 00:12:28.845 mempool/dpaa2: not in enabled drivers build config 00:12:28.845 mempool/octeontx: not in enabled drivers build config 00:12:28.845 mempool/stack: not in enabled drivers build config 00:12:28.845 dma/cnxk: not in enabled drivers build config 00:12:28.845 dma/dpaa: not in enabled drivers build config 00:12:28.845 dma/dpaa2: not in enabled 
drivers build config 00:12:28.845 dma/hisilicon: not in enabled drivers build config 00:12:28.845 dma/idxd: not in enabled drivers build config 00:12:28.845 dma/ioat: not in enabled drivers build config 00:12:28.845 dma/skeleton: not in enabled drivers build config 00:12:28.845 net/af_packet: not in enabled drivers build config 00:12:28.845 net/af_xdp: not in enabled drivers build config 00:12:28.845 net/ark: not in enabled drivers build config 00:12:28.845 net/atlantic: not in enabled drivers build config 00:12:28.845 net/avp: not in enabled drivers build config 00:12:28.845 net/axgbe: not in enabled drivers build config 00:12:28.845 net/bnx2x: not in enabled drivers build config 00:12:28.845 net/bnxt: not in enabled drivers build config 00:12:28.845 net/bonding: not in enabled drivers build config 00:12:28.845 net/cnxk: not in enabled drivers build config 00:12:28.845 net/cpfl: not in enabled drivers build config 00:12:28.845 net/cxgbe: not in enabled drivers build config 00:12:28.845 net/dpaa: not in enabled drivers build config 00:12:28.845 net/dpaa2: not in enabled drivers build config 00:12:28.845 net/e1000: not in enabled drivers build config 00:12:28.845 net/ena: not in enabled drivers build config 00:12:28.845 net/enetc: not in enabled drivers build config 00:12:28.845 net/enetfec: not in enabled drivers build config 00:12:28.845 net/enic: not in enabled drivers build config 00:12:28.845 net/failsafe: not in enabled drivers build config 00:12:28.845 net/fm10k: not in enabled drivers build config 00:12:28.845 net/gve: not in enabled drivers build config 00:12:28.845 net/hinic: not in enabled drivers build config 00:12:28.845 net/hns3: not in enabled drivers build config 00:12:28.845 net/i40e: not in enabled drivers build config 00:12:28.845 net/iavf: not in enabled drivers build config 00:12:28.845 net/ice: not in enabled drivers build config 00:12:28.845 net/idpf: not in enabled drivers build config 00:12:28.845 net/igc: not in enabled drivers build config 00:12:28.845 net/ionic: not in enabled drivers build config 00:12:28.845 net/ipn3ke: not in enabled drivers build config 00:12:28.845 net/ixgbe: not in enabled drivers build config 00:12:28.845 net/mana: not in enabled drivers build config 00:12:28.845 net/memif: not in enabled drivers build config 00:12:28.845 net/mlx4: not in enabled drivers build config 00:12:28.845 net/mlx5: not in enabled drivers build config 00:12:28.845 net/mvneta: not in enabled drivers build config 00:12:28.845 net/mvpp2: not in enabled drivers build config 00:12:28.845 net/netvsc: not in enabled drivers build config 00:12:28.845 net/nfb: not in enabled drivers build config 00:12:28.845 net/nfp: not in enabled drivers build config 00:12:28.845 net/ngbe: not in enabled drivers build config 00:12:28.845 net/null: not in enabled drivers build config 00:12:28.846 net/octeontx: not in enabled drivers build config 00:12:28.846 net/octeon_ep: not in enabled drivers build config 00:12:28.846 net/pcap: not in enabled drivers build config 00:12:28.846 net/pfe: not in enabled drivers build config 00:12:28.846 net/qede: not in enabled drivers build config 00:12:28.846 net/ring: not in enabled drivers build config 00:12:28.846 net/sfc: not in enabled drivers build config 00:12:28.846 net/softnic: not in enabled drivers build config 00:12:28.846 net/tap: not in enabled drivers build config 00:12:28.846 net/thunderx: not in enabled drivers build config 00:12:28.846 net/txgbe: not in enabled drivers build config 00:12:28.846 net/vdev_netvsc: not in enabled drivers 
build config 00:12:28.846 net/vhost: not in enabled drivers build config 00:12:28.846 net/virtio: not in enabled drivers build config 00:12:28.846 net/vmxnet3: not in enabled drivers build config 00:12:28.846 raw/*: missing internal dependency, "rawdev" 00:12:28.846 crypto/armv8: not in enabled drivers build config 00:12:28.846 crypto/bcmfs: not in enabled drivers build config 00:12:28.846 crypto/caam_jr: not in enabled drivers build config 00:12:28.846 crypto/ccp: not in enabled drivers build config 00:12:28.846 crypto/cnxk: not in enabled drivers build config 00:12:28.846 crypto/dpaa_sec: not in enabled drivers build config 00:12:28.846 crypto/dpaa2_sec: not in enabled drivers build config 00:12:28.846 crypto/ipsec_mb: not in enabled drivers build config 00:12:28.846 crypto/mlx5: not in enabled drivers build config 00:12:28.846 crypto/mvsam: not in enabled drivers build config 00:12:28.846 crypto/nitrox: not in enabled drivers build config 00:12:28.846 crypto/null: not in enabled drivers build config 00:12:28.846 crypto/octeontx: not in enabled drivers build config 00:12:28.846 crypto/openssl: not in enabled drivers build config 00:12:28.846 crypto/scheduler: not in enabled drivers build config 00:12:28.846 crypto/uadk: not in enabled drivers build config 00:12:28.846 crypto/virtio: not in enabled drivers build config 00:12:28.846 compress/isal: not in enabled drivers build config 00:12:28.846 compress/mlx5: not in enabled drivers build config 00:12:28.846 compress/octeontx: not in enabled drivers build config 00:12:28.846 compress/zlib: not in enabled drivers build config 00:12:28.846 regex/*: missing internal dependency, "regexdev" 00:12:28.846 ml/*: missing internal dependency, "mldev" 00:12:28.846 vdpa/ifc: not in enabled drivers build config 00:12:28.846 vdpa/mlx5: not in enabled drivers build config 00:12:28.846 vdpa/nfp: not in enabled drivers build config 00:12:28.846 vdpa/sfc: not in enabled drivers build config 00:12:28.846 event/*: missing internal dependency, "eventdev" 00:12:28.846 baseband/*: missing internal dependency, "bbdev" 00:12:28.846 gpu/*: missing internal dependency, "gpudev" 00:12:28.846 00:12:28.846 00:12:28.846 Build targets in project: 85 00:12:28.846 00:12:28.846 DPDK 23.11.0 00:12:28.846 00:12:28.846 User defined options 00:12:28.846 buildtype : debug 00:12:28.846 default_library : shared 00:12:28.846 libdir : lib 00:12:28.846 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:28.846 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:12:28.846 c_link_args : 00:12:28.846 cpu_instruction_set: native 00:12:28.846 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:28.846 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:28.846 enable_docs : false 00:12:28.846 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:28.846 enable_kmods : false 00:12:28.846 tests : false 00:12:28.846 00:12:28.846 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:12:28.846 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:28.846 [1/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:12:28.846 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:28.846 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:28.846 [4/265] Linking static target lib/librte_log.a 00:12:28.846 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:28.846 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:28.846 [7/265] Linking static target lib/librte_kvargs.a 00:12:28.846 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:28.846 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:28.846 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:28.846 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:28.846 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:28.846 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:28.846 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:28.846 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:28.846 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:28.846 [17/265] Linking static target lib/librte_telemetry.a 00:12:28.846 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:28.846 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:28.846 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:28.846 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:28.846 [22/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:28.846 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:28.846 [24/265] Linking target lib/librte_log.so.24.0 00:12:28.846 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:29.106 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:29.106 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:29.106 [28/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:12:29.106 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:29.365 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:29.365 [31/265] Linking target lib/librte_kvargs.so.24.0 00:12:29.365 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:29.365 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:29.365 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:29.365 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:29.365 [36/265] Linking target lib/librte_telemetry.so.24.0 00:12:29.624 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:29.625 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:29.625 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:29.625 [40/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:12:29.625 [41/265] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:29.625 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:29.625 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:29.625 [44/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:12:29.625 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:29.883 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:30.142 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:30.142 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:30.142 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:30.402 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:30.402 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:30.402 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:30.402 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:12:30.402 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:30.402 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:30.402 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:30.402 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:30.402 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:30.668 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:30.668 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:12:30.668 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:30.946 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:30.946 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:30.946 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:30.946 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:30.946 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:30.946 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:30.946 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:12:31.205 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:12:31.205 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:12:31.205 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:12:31.464 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:12:31.464 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:12:31.464 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:31.464 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:31.464 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:12:31.464 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:31.464 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:12:31.464 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:12:31.464 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:12:31.723 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:12:31.723 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:12:31.981 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:31.981 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:31.981 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:31.981 [86/265] Linking static target lib/librte_ring.a 00:12:31.981 [87/265] Linking static target lib/librte_eal.a 00:12:32.240 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:32.240 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:32.240 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:32.240 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:32.240 [92/265] Linking static target lib/librte_mempool.a 00:12:32.499 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:32.499 [94/265] Linking static target lib/librte_rcu.a 00:12:32.499 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:32.499 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:32.759 [97/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:32.759 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:33.019 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:33.019 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:33.019 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:33.019 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:33.019 [103/265] Linking static target lib/librte_mbuf.a 00:12:33.019 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:33.019 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:33.277 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:33.277 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:33.277 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:33.277 [109/265] Linking static target lib/librte_net.a 00:12:33.535 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:12:33.535 [111/265] Linking static target lib/librte_meter.a 00:12:33.535 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:33.535 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:33.535 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:33.794 [115/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:33.794 [116/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:33.794 [117/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:33.794 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:34.054 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:34.054 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:34.313 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:34.313 [122/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:34.572 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:34.572 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:34.572 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:34.572 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:34.572 [127/265] Linking static target lib/librte_pci.a 00:12:34.832 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:34.832 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:34.832 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:34.832 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:34.832 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:34.832 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:34.832 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:34.832 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:34.832 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:34.832 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:35.091 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:35.091 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:35.091 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:35.091 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:35.091 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:35.091 [143/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:35.091 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:35.091 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:35.351 [146/265] Linking static target lib/librte_cmdline.a 00:12:35.351 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:35.351 [148/265] Linking static target lib/librte_ethdev.a 00:12:35.610 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:35.610 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:35.610 [151/265] Linking static target lib/librte_timer.a 00:12:35.610 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:35.610 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:35.869 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:35.869 [155/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:35.869 [156/265] Linking static target lib/librte_compressdev.a 00:12:35.869 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:36.129 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:36.129 [159/265] Linking static target lib/librte_hash.a 00:12:36.129 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:36.129 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:36.391 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
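
The numbered [N/265] entries in this part of the log are meson and ninja building the DPDK 23.11 submodule bundled with SPDK, using the "User defined options" printed above (debug build, shared libraries, only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled, most apps and libraries disabled). As a rough sketch only (SPDK's ./configure and dpdkbuild/Makefile generate the real invocation, so this is a reconstruction from the options shown in the log, not the literal command that ran), an equivalent standalone configuration would look approximately like:

    # Approximate standalone equivalent of the DPDK configuration shown above.
    # All option values are copied from the "User defined options" block in the log.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false
    ninja -C build-tmp -j 10
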
00:12:36.391 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:36.391 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:12:36.391 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:36.391 [166/265] Linking static target lib/librte_dmadev.a 00:12:36.650 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:12:36.650 [168/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:36.650 [169/265] Linking static target lib/librte_cryptodev.a 00:12:36.650 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:12:36.650 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:12:36.651 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:36.910 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:12:36.910 [174/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:37.169 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:37.169 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:12:37.169 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:12:37.169 [178/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:37.169 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:12:37.169 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:12:37.428 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:12:37.428 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:12:37.428 [183/265] Linking static target lib/librte_power.a 00:12:37.687 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:37.687 [185/265] Linking static target lib/librte_reorder.a 00:12:37.687 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:12:37.687 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:12:37.687 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:12:37.687 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:37.946 [190/265] Linking static target lib/librte_security.a 00:12:38.204 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:12:38.204 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:38.462 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:12:38.721 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:38.721 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:38.721 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:12:38.721 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:12:38.721 [198/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:12:38.980 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:39.238 [200/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:39.238 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:12:39.238 
[202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:12:39.238 [203/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:39.497 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:12:39.497 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:39.497 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:39.497 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:39.497 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:39.756 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:39.756 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:39.756 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:39.756 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:39.756 [213/265] Linking static target drivers/librte_bus_pci.a 00:12:39.756 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:39.756 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:39.756 [216/265] Linking static target drivers/librte_bus_vdev.a 00:12:39.756 [217/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:12:39.756 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:39.756 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:40.016 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.016 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:40.016 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:40.016 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:40.016 [224/265] Linking static target drivers/librte_mempool_ring.a 00:12:40.275 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.845 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:12:40.845 [227/265] Linking static target lib/librte_vhost.a 00:12:43.379 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:45.295 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:45.860 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:45.860 [231/265] Linking target lib/librte_eal.so.24.0 00:12:46.117 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:12:46.117 [233/265] Linking target lib/librte_ring.so.24.0 00:12:46.117 [234/265] Linking target lib/librte_meter.so.24.0 00:12:46.117 [235/265] Linking target lib/librte_timer.so.24.0 00:12:46.117 [236/265] Linking target lib/librte_pci.so.24.0 00:12:46.117 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:12:46.117 [238/265] Linking target lib/librte_dmadev.so.24.0 00:12:46.375 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:12:46.375 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:12:46.375 [241/265] 
Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:12:46.375 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:12:46.375 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:12:46.375 [244/265] Linking target lib/librte_mempool.so.24.0 00:12:46.375 [245/265] Linking target lib/librte_rcu.so.24.0 00:12:46.375 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:12:46.630 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:12:46.630 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:12:46.630 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:12:46.630 [250/265] Linking target lib/librte_mbuf.so.24.0 00:12:46.887 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:12:46.887 [252/265] Linking target lib/librte_compressdev.so.24.0 00:12:46.887 [253/265] Linking target lib/librte_reorder.so.24.0 00:12:46.887 [254/265] Linking target lib/librte_net.so.24.0 00:12:46.887 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:12:46.887 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:12:47.145 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:12:47.145 [258/265] Linking target lib/librte_cmdline.so.24.0 00:12:47.145 [259/265] Linking target lib/librte_hash.so.24.0 00:12:47.145 [260/265] Linking target lib/librte_ethdev.so.24.0 00:12:47.145 [261/265] Linking target lib/librte_security.so.24.0 00:12:47.145 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:12:47.145 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:12:47.402 [264/265] Linking target lib/librte_power.so.24.0 00:12:47.402 [265/265] Linking target lib/librte_vhost.so.24.0 00:12:47.402 INFO: autodetecting backend as ninja 00:12:47.402 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:12:48.775 CC lib/ut_mock/mock.o 00:12:48.775 CC lib/log/log.o 00:12:48.775 CC lib/log/log_flags.o 00:12:48.775 CC lib/ut/ut.o 00:12:48.775 CC lib/log/log_deprecated.o 00:12:48.775 LIB libspdk_ut_mock.a 00:12:49.033 SO libspdk_ut_mock.so.6.0 00:12:49.033 LIB libspdk_ut.a 00:12:49.033 LIB libspdk_log.a 00:12:49.033 SO libspdk_ut.so.2.0 00:12:49.033 SYMLINK libspdk_ut_mock.so 00:12:49.033 SO libspdk_log.so.7.0 00:12:49.033 SYMLINK libspdk_ut.so 00:12:49.033 SYMLINK libspdk_log.so 00:12:49.600 CXX lib/trace_parser/trace.o 00:12:49.600 CC lib/util/base64.o 00:12:49.600 CC lib/util/crc16.o 00:12:49.600 CC lib/util/bit_array.o 00:12:49.600 CC lib/util/cpuset.o 00:12:49.600 CC lib/util/crc32.o 00:12:49.600 CC lib/util/crc32c.o 00:12:49.600 CC lib/ioat/ioat.o 00:12:49.600 CC lib/dma/dma.o 00:12:49.600 CC lib/util/crc32_ieee.o 00:12:49.600 CC lib/vfio_user/host/vfio_user_pci.o 00:12:49.600 CC lib/util/crc64.o 00:12:49.600 CC lib/vfio_user/host/vfio_user.o 00:12:49.600 CC lib/util/dif.o 00:12:49.600 CC lib/util/fd.o 00:12:49.600 LIB libspdk_dma.a 00:12:49.600 SO libspdk_dma.so.4.0 00:12:49.600 CC lib/util/file.o 00:12:49.859 CC lib/util/hexlify.o 00:12:49.859 LIB libspdk_ioat.a 00:12:49.859 SYMLINK libspdk_dma.so 00:12:49.859 CC lib/util/iov.o 00:12:49.859 CC lib/util/math.o 00:12:49.859 CC lib/util/pipe.o 00:12:49.859 SO libspdk_ioat.so.7.0 00:12:49.859 CC lib/util/strerror_tls.o 
00:12:49.859 LIB libspdk_vfio_user.a 00:12:49.859 SYMLINK libspdk_ioat.so 00:12:49.859 CC lib/util/string.o 00:12:49.859 CC lib/util/uuid.o 00:12:49.859 CC lib/util/fd_group.o 00:12:49.859 SO libspdk_vfio_user.so.5.0 00:12:49.859 CC lib/util/xor.o 00:12:49.859 SYMLINK libspdk_vfio_user.so 00:12:49.859 CC lib/util/zipf.o 00:12:50.117 LIB libspdk_util.a 00:12:50.376 SO libspdk_util.so.9.0 00:12:50.376 LIB libspdk_trace_parser.a 00:12:50.376 SYMLINK libspdk_util.so 00:12:50.376 SO libspdk_trace_parser.so.5.0 00:12:50.635 SYMLINK libspdk_trace_parser.so 00:12:50.635 CC lib/env_dpdk/env.o 00:12:50.635 CC lib/env_dpdk/memory.o 00:12:50.635 CC lib/env_dpdk/pci.o 00:12:50.635 CC lib/env_dpdk/threads.o 00:12:50.635 CC lib/env_dpdk/init.o 00:12:50.635 CC lib/idxd/idxd.o 00:12:50.635 CC lib/vmd/vmd.o 00:12:50.635 CC lib/rdma/common.o 00:12:50.635 CC lib/json/json_parse.o 00:12:50.635 CC lib/conf/conf.o 00:12:50.894 CC lib/env_dpdk/pci_ioat.o 00:12:50.894 LIB libspdk_conf.a 00:12:50.894 CC lib/json/json_util.o 00:12:50.894 CC lib/vmd/led.o 00:12:50.894 SO libspdk_conf.so.6.0 00:12:50.894 CC lib/rdma/rdma_verbs.o 00:12:50.894 SYMLINK libspdk_conf.so 00:12:50.894 CC lib/json/json_write.o 00:12:51.152 CC lib/env_dpdk/pci_virtio.o 00:12:51.153 CC lib/env_dpdk/pci_vmd.o 00:12:51.153 CC lib/env_dpdk/pci_idxd.o 00:12:51.153 CC lib/env_dpdk/pci_event.o 00:12:51.153 CC lib/idxd/idxd_user.o 00:12:51.153 CC lib/env_dpdk/sigbus_handler.o 00:12:51.153 CC lib/env_dpdk/pci_dpdk.o 00:12:51.153 LIB libspdk_rdma.a 00:12:51.153 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:51.153 SO libspdk_rdma.so.6.0 00:12:51.153 LIB libspdk_vmd.a 00:12:51.153 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:51.412 LIB libspdk_json.a 00:12:51.412 SO libspdk_vmd.so.6.0 00:12:51.412 SYMLINK libspdk_rdma.so 00:12:51.412 SO libspdk_json.so.6.0 00:12:51.412 LIB libspdk_idxd.a 00:12:51.412 SYMLINK libspdk_vmd.so 00:12:51.412 SO libspdk_idxd.so.12.0 00:12:51.412 SYMLINK libspdk_json.so 00:12:51.412 SYMLINK libspdk_idxd.so 00:12:51.980 CC lib/jsonrpc/jsonrpc_server.o 00:12:51.980 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:12:51.980 CC lib/jsonrpc/jsonrpc_client.o 00:12:51.980 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:51.980 LIB libspdk_env_dpdk.a 00:12:52.239 SO libspdk_env_dpdk.so.14.0 00:12:52.239 LIB libspdk_jsonrpc.a 00:12:52.239 SO libspdk_jsonrpc.so.6.0 00:12:52.239 SYMLINK libspdk_jsonrpc.so 00:12:52.239 SYMLINK libspdk_env_dpdk.so 00:12:52.812 CC lib/rpc/rpc.o 00:12:53.080 LIB libspdk_rpc.a 00:12:53.080 SO libspdk_rpc.so.6.0 00:12:53.080 SYMLINK libspdk_rpc.so 00:12:53.667 CC lib/trace/trace.o 00:12:53.667 CC lib/trace/trace_rpc.o 00:12:53.667 CC lib/trace/trace_flags.o 00:12:53.667 CC lib/keyring/keyring.o 00:12:53.667 CC lib/keyring/keyring_rpc.o 00:12:53.667 CC lib/notify/notify_rpc.o 00:12:53.667 CC lib/notify/notify.o 00:12:53.667 LIB libspdk_notify.a 00:12:53.667 SO libspdk_notify.so.6.0 00:12:53.668 LIB libspdk_trace.a 00:12:53.941 LIB libspdk_keyring.a 00:12:53.942 SYMLINK libspdk_notify.so 00:12:53.942 SO libspdk_trace.so.10.0 00:12:53.942 SO libspdk_keyring.so.1.0 00:12:53.942 SYMLINK libspdk_trace.so 00:12:53.942 SYMLINK libspdk_keyring.so 00:12:54.229 CC lib/sock/sock_rpc.o 00:12:54.229 CC lib/sock/sock.o 00:12:54.229 CC lib/thread/thread.o 00:12:54.229 CC lib/thread/iobuf.o 00:12:54.799 LIB libspdk_sock.a 00:12:54.799 SO libspdk_sock.so.9.0 00:12:54.799 SYMLINK libspdk_sock.so 00:12:55.376 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:55.376 CC lib/nvme/nvme_fabric.o 00:12:55.376 CC lib/nvme/nvme_ctrlr.o 00:12:55.376 CC lib/nvme/nvme_ns_cmd.o 00:12:55.376 
CC lib/nvme/nvme_pcie.o 00:12:55.376 CC lib/nvme/nvme_pcie_common.o 00:12:55.376 CC lib/nvme/nvme.o 00:12:55.376 CC lib/nvme/nvme_ns.o 00:12:55.376 CC lib/nvme/nvme_qpair.o 00:12:55.638 LIB libspdk_thread.a 00:12:55.902 SO libspdk_thread.so.10.0 00:12:55.902 SYMLINK libspdk_thread.so 00:12:55.902 CC lib/nvme/nvme_quirks.o 00:12:55.902 CC lib/nvme/nvme_transport.o 00:12:56.168 CC lib/nvme/nvme_discovery.o 00:12:56.168 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:56.168 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:56.168 CC lib/nvme/nvme_tcp.o 00:12:56.168 CC lib/nvme/nvme_opal.o 00:12:56.168 CC lib/nvme/nvme_io_msg.o 00:12:56.433 CC lib/nvme/nvme_poll_group.o 00:12:56.433 CC lib/nvme/nvme_zns.o 00:12:56.696 CC lib/nvme/nvme_stubs.o 00:12:56.696 CC lib/nvme/nvme_auth.o 00:12:56.696 CC lib/nvme/nvme_cuse.o 00:12:56.696 CC lib/nvme/nvme_rdma.o 00:12:56.953 CC lib/accel/accel.o 00:12:56.953 CC lib/blob/blobstore.o 00:12:56.953 CC lib/blob/request.o 00:12:57.216 CC lib/init/json_config.o 00:12:57.216 CC lib/init/subsystem.o 00:12:57.476 CC lib/accel/accel_rpc.o 00:12:57.476 CC lib/init/subsystem_rpc.o 00:12:57.476 CC lib/init/rpc.o 00:12:57.476 CC lib/blob/zeroes.o 00:12:57.476 CC lib/blob/blob_bs_dev.o 00:12:57.476 CC lib/accel/accel_sw.o 00:12:57.476 LIB libspdk_init.a 00:12:57.740 SO libspdk_init.so.5.0 00:12:57.740 SYMLINK libspdk_init.so 00:12:57.740 LIB libspdk_accel.a 00:12:57.740 CC lib/virtio/virtio_pci.o 00:12:57.740 CC lib/virtio/virtio_vfio_user.o 00:12:57.740 CC lib/virtio/virtio.o 00:12:57.740 CC lib/virtio/virtio_vhost_user.o 00:12:57.999 SO libspdk_accel.so.15.0 00:12:57.999 SYMLINK libspdk_accel.so 00:12:57.999 LIB libspdk_nvme.a 00:12:57.999 CC lib/event/app.o 00:12:57.999 CC lib/event/reactor.o 00:12:57.999 CC lib/event/log_rpc.o 00:12:57.999 CC lib/event/app_rpc.o 00:12:58.262 CC lib/event/scheduler_static.o 00:12:58.262 LIB libspdk_virtio.a 00:12:58.262 SO libspdk_nvme.so.13.0 00:12:58.262 CC lib/bdev/bdev.o 00:12:58.262 CC lib/bdev/bdev_zone.o 00:12:58.262 CC lib/bdev/bdev_rpc.o 00:12:58.262 SO libspdk_virtio.so.7.0 00:12:58.262 CC lib/bdev/part.o 00:12:58.262 SYMLINK libspdk_virtio.so 00:12:58.262 CC lib/bdev/scsi_nvme.o 00:12:58.526 LIB libspdk_event.a 00:12:58.526 SYMLINK libspdk_nvme.so 00:12:58.526 SO libspdk_event.so.13.0 00:12:58.783 SYMLINK libspdk_event.so 00:12:59.720 LIB libspdk_blob.a 00:12:59.720 SO libspdk_blob.so.11.0 00:12:59.977 SYMLINK libspdk_blob.so 00:13:00.235 CC lib/blobfs/blobfs.o 00:13:00.235 CC lib/blobfs/tree.o 00:13:00.235 CC lib/lvol/lvol.o 00:13:00.805 LIB libspdk_bdev.a 00:13:00.805 SO libspdk_bdev.so.15.0 00:13:00.805 SYMLINK libspdk_bdev.so 00:13:01.063 CC lib/nbd/nbd.o 00:13:01.063 CC lib/nbd/nbd_rpc.o 00:13:01.063 CC lib/nvmf/ctrlr.o 00:13:01.063 CC lib/nvmf/ctrlr_bdev.o 00:13:01.063 CC lib/nvmf/ctrlr_discovery.o 00:13:01.063 LIB libspdk_blobfs.a 00:13:01.063 CC lib/ftl/ftl_core.o 00:13:01.063 CC lib/ublk/ublk.o 00:13:01.063 CC lib/scsi/dev.o 00:13:01.321 SO libspdk_blobfs.so.10.0 00:13:01.321 LIB libspdk_lvol.a 00:13:01.321 SO libspdk_lvol.so.10.0 00:13:01.321 SYMLINK libspdk_blobfs.so 00:13:01.321 CC lib/nvmf/subsystem.o 00:13:01.321 CC lib/nvmf/nvmf.o 00:13:01.321 SYMLINK libspdk_lvol.so 00:13:01.321 CC lib/ublk/ublk_rpc.o 00:13:01.321 CC lib/scsi/lun.o 00:13:01.581 CC lib/scsi/port.o 00:13:01.582 CC lib/ftl/ftl_init.o 00:13:01.582 LIB libspdk_nbd.a 00:13:01.582 SO libspdk_nbd.so.7.0 00:13:01.582 CC lib/scsi/scsi.o 00:13:01.582 SYMLINK libspdk_nbd.so 00:13:01.582 CC lib/scsi/scsi_bdev.o 00:13:01.582 CC lib/scsi/scsi_pr.o 00:13:01.582 CC 
lib/scsi/scsi_rpc.o 00:13:01.843 LIB libspdk_ublk.a 00:13:01.843 CC lib/ftl/ftl_layout.o 00:13:01.843 SO libspdk_ublk.so.3.0 00:13:01.843 CC lib/nvmf/nvmf_rpc.o 00:13:01.843 CC lib/scsi/task.o 00:13:01.843 CC lib/ftl/ftl_debug.o 00:13:01.843 SYMLINK libspdk_ublk.so 00:13:01.843 CC lib/ftl/ftl_io.o 00:13:02.101 CC lib/ftl/ftl_sb.o 00:13:02.101 CC lib/ftl/ftl_l2p.o 00:13:02.101 CC lib/ftl/ftl_l2p_flat.o 00:13:02.101 CC lib/ftl/ftl_nv_cache.o 00:13:02.101 LIB libspdk_scsi.a 00:13:02.101 CC lib/ftl/ftl_band.o 00:13:02.101 CC lib/ftl/ftl_band_ops.o 00:13:02.101 SO libspdk_scsi.so.9.0 00:13:02.360 CC lib/ftl/ftl_writer.o 00:13:02.360 CC lib/nvmf/transport.o 00:13:02.360 CC lib/nvmf/tcp.o 00:13:02.360 SYMLINK libspdk_scsi.so 00:13:02.360 CC lib/nvmf/rdma.o 00:13:02.618 CC lib/ftl/ftl_rq.o 00:13:02.618 CC lib/ftl/ftl_reloc.o 00:13:02.618 CC lib/ftl/ftl_l2p_cache.o 00:13:02.618 CC lib/iscsi/conn.o 00:13:02.618 CC lib/vhost/vhost.o 00:13:02.618 CC lib/iscsi/init_grp.o 00:13:02.618 CC lib/vhost/vhost_rpc.o 00:13:02.877 CC lib/ftl/ftl_p2l.o 00:13:02.877 CC lib/iscsi/iscsi.o 00:13:02.877 CC lib/iscsi/md5.o 00:13:02.877 CC lib/iscsi/param.o 00:13:03.135 CC lib/vhost/vhost_scsi.o 00:13:03.135 CC lib/iscsi/portal_grp.o 00:13:03.135 CC lib/vhost/vhost_blk.o 00:13:03.135 CC lib/ftl/mngt/ftl_mngt.o 00:13:03.394 CC lib/vhost/rte_vhost_user.o 00:13:03.394 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:03.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:03.394 CC lib/iscsi/tgt_node.o 00:13:03.394 CC lib/iscsi/iscsi_subsystem.o 00:13:03.394 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:03.394 CC lib/iscsi/iscsi_rpc.o 00:13:03.661 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:03.661 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:03.661 CC lib/iscsi/task.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:03.922 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:03.922 CC lib/ftl/utils/ftl_conf.o 00:13:03.922 CC lib/ftl/utils/ftl_md.o 00:13:04.182 LIB libspdk_iscsi.a 00:13:04.182 CC lib/ftl/utils/ftl_mempool.o 00:13:04.182 CC lib/ftl/utils/ftl_bitmap.o 00:13:04.182 CC lib/ftl/utils/ftl_property.o 00:13:04.182 LIB libspdk_nvmf.a 00:13:04.182 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:04.182 SO libspdk_iscsi.so.8.0 00:13:04.182 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:04.182 LIB libspdk_vhost.a 00:13:04.182 SO libspdk_nvmf.so.18.0 00:13:04.182 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:04.182 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:04.182 SO libspdk_vhost.so.8.0 00:13:04.182 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:04.440 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:04.440 SYMLINK libspdk_iscsi.so 00:13:04.440 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:04.440 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:04.440 SYMLINK libspdk_vhost.so 00:13:04.440 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:04.440 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:04.440 CC lib/ftl/base/ftl_base_dev.o 00:13:04.440 CC lib/ftl/base/ftl_base_bdev.o 00:13:04.440 SYMLINK libspdk_nvmf.so 00:13:04.440 CC lib/ftl/ftl_trace.o 00:13:04.701 LIB libspdk_ftl.a 00:13:04.963 SO libspdk_ftl.so.9.0 00:13:05.227 SYMLINK libspdk_ftl.so 00:13:05.812 CC module/env_dpdk/env_dpdk_rpc.o 00:13:05.812 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:05.812 CC module/scheduler/gscheduler/gscheduler.o 00:13:05.812 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:05.812 CC 
module/accel/ioat/accel_ioat.o 00:13:05.812 CC module/sock/posix/posix.o 00:13:05.812 CC module/blob/bdev/blob_bdev.o 00:13:05.812 CC module/accel/error/accel_error.o 00:13:05.812 CC module/keyring/file/keyring.o 00:13:05.812 CC module/accel/dsa/accel_dsa.o 00:13:05.812 LIB libspdk_env_dpdk_rpc.a 00:13:05.812 SO libspdk_env_dpdk_rpc.so.6.0 00:13:05.812 LIB libspdk_scheduler_gscheduler.a 00:13:05.812 LIB libspdk_scheduler_dpdk_governor.a 00:13:06.084 CC module/keyring/file/keyring_rpc.o 00:13:06.084 SO libspdk_scheduler_gscheduler.so.4.0 00:13:06.084 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:06.084 CC module/accel/ioat/accel_ioat_rpc.o 00:13:06.084 LIB libspdk_scheduler_dynamic.a 00:13:06.084 CC module/accel/error/accel_error_rpc.o 00:13:06.084 SYMLINK libspdk_env_dpdk_rpc.so 00:13:06.084 SO libspdk_scheduler_dynamic.so.4.0 00:13:06.084 SYMLINK libspdk_scheduler_gscheduler.so 00:13:06.084 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:06.084 CC module/accel/dsa/accel_dsa_rpc.o 00:13:06.084 LIB libspdk_blob_bdev.a 00:13:06.084 SYMLINK libspdk_scheduler_dynamic.so 00:13:06.084 SO libspdk_blob_bdev.so.11.0 00:13:06.084 LIB libspdk_keyring_file.a 00:13:06.084 LIB libspdk_accel_ioat.a 00:13:06.084 SYMLINK libspdk_blob_bdev.so 00:13:06.084 LIB libspdk_accel_error.a 00:13:06.084 SO libspdk_keyring_file.so.1.0 00:13:06.084 LIB libspdk_accel_dsa.a 00:13:06.084 SO libspdk_accel_ioat.so.6.0 00:13:06.084 SO libspdk_accel_error.so.2.0 00:13:06.084 SYMLINK libspdk_keyring_file.so 00:13:06.359 SO libspdk_accel_dsa.so.5.0 00:13:06.359 CC module/accel/iaa/accel_iaa.o 00:13:06.359 CC module/accel/iaa/accel_iaa_rpc.o 00:13:06.359 SYMLINK libspdk_accel_ioat.so 00:13:06.359 SYMLINK libspdk_accel_error.so 00:13:06.359 SYMLINK libspdk_accel_dsa.so 00:13:06.359 CC module/bdev/gpt/gpt.o 00:13:06.359 CC module/bdev/error/vbdev_error.o 00:13:06.359 CC module/bdev/lvol/vbdev_lvol.o 00:13:06.359 CC module/blobfs/bdev/blobfs_bdev.o 00:13:06.359 CC module/bdev/delay/vbdev_delay.o 00:13:06.359 CC module/bdev/malloc/bdev_malloc.o 00:13:06.359 LIB libspdk_accel_iaa.a 00:13:06.359 CC module/bdev/null/bdev_null.o 00:13:06.359 SO libspdk_accel_iaa.so.3.0 00:13:06.359 LIB libspdk_sock_posix.a 00:13:06.634 CC module/bdev/nvme/bdev_nvme.o 00:13:06.634 SO libspdk_sock_posix.so.6.0 00:13:06.634 SYMLINK libspdk_accel_iaa.so 00:13:06.634 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:06.634 CC module/bdev/gpt/vbdev_gpt.o 00:13:06.634 CC module/bdev/null/bdev_null_rpc.o 00:13:06.634 SYMLINK libspdk_sock_posix.so 00:13:06.634 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:06.634 CC module/bdev/error/vbdev_error_rpc.o 00:13:06.634 CC module/bdev/nvme/nvme_rpc.o 00:13:06.634 LIB libspdk_blobfs_bdev.a 00:13:06.634 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:06.634 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:06.634 SO libspdk_blobfs_bdev.so.6.0 00:13:06.634 LIB libspdk_bdev_null.a 00:13:06.902 SO libspdk_bdev_null.so.6.0 00:13:06.902 LIB libspdk_bdev_error.a 00:13:06.903 SYMLINK libspdk_blobfs_bdev.so 00:13:06.903 LIB libspdk_bdev_gpt.a 00:13:06.903 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:06.903 CC module/bdev/nvme/bdev_mdns_client.o 00:13:06.903 SO libspdk_bdev_error.so.6.0 00:13:06.903 SYMLINK libspdk_bdev_null.so 00:13:06.903 SO libspdk_bdev_gpt.so.6.0 00:13:06.903 CC module/bdev/nvme/vbdev_opal.o 00:13:06.903 LIB libspdk_bdev_malloc.a 00:13:06.903 LIB libspdk_bdev_delay.a 00:13:06.903 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:06.903 SYMLINK libspdk_bdev_error.so 00:13:06.903 SYMLINK libspdk_bdev_gpt.so 00:13:06.903 SO 
libspdk_bdev_delay.so.6.0 00:13:06.903 SO libspdk_bdev_malloc.so.6.0 00:13:06.903 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:06.903 SYMLINK libspdk_bdev_malloc.so 00:13:06.903 SYMLINK libspdk_bdev_delay.so 00:13:07.164 CC module/bdev/passthru/vbdev_passthru.o 00:13:07.164 LIB libspdk_bdev_lvol.a 00:13:07.164 SO libspdk_bdev_lvol.so.6.0 00:13:07.164 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:07.164 CC module/bdev/raid/bdev_raid.o 00:13:07.164 CC module/bdev/split/vbdev_split.o 00:13:07.164 SYMLINK libspdk_bdev_lvol.so 00:13:07.164 CC module/bdev/split/vbdev_split_rpc.o 00:13:07.164 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:07.164 CC module/bdev/aio/bdev_aio.o 00:13:07.424 CC module/bdev/ftl/bdev_ftl.o 00:13:07.424 CC module/bdev/iscsi/bdev_iscsi.o 00:13:07.424 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:07.424 LIB libspdk_bdev_passthru.a 00:13:07.424 LIB libspdk_bdev_split.a 00:13:07.424 SO libspdk_bdev_passthru.so.6.0 00:13:07.424 SO libspdk_bdev_split.so.6.0 00:13:07.424 SYMLINK libspdk_bdev_passthru.so 00:13:07.424 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:07.424 CC module/bdev/aio/bdev_aio_rpc.o 00:13:07.424 SYMLINK libspdk_bdev_split.so 00:13:07.424 CC module/bdev/raid/bdev_raid_rpc.o 00:13:07.690 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:07.690 CC module/bdev/raid/bdev_raid_sb.o 00:13:07.690 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:07.690 CC module/bdev/raid/raid0.o 00:13:07.690 LIB libspdk_bdev_aio.a 00:13:07.690 LIB libspdk_bdev_iscsi.a 00:13:07.690 LIB libspdk_bdev_ftl.a 00:13:07.690 SO libspdk_bdev_aio.so.6.0 00:13:07.690 SO libspdk_bdev_iscsi.so.6.0 00:13:07.690 LIB libspdk_bdev_zone_block.a 00:13:07.690 CC module/bdev/raid/raid1.o 00:13:07.690 SO libspdk_bdev_ftl.so.6.0 00:13:07.690 SYMLINK libspdk_bdev_aio.so 00:13:07.690 SO libspdk_bdev_zone_block.so.6.0 00:13:07.690 CC module/bdev/raid/concat.o 00:13:07.690 SYMLINK libspdk_bdev_iscsi.so 00:13:07.951 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:07.951 SYMLINK libspdk_bdev_ftl.so 00:13:07.951 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:07.951 SYMLINK libspdk_bdev_zone_block.so 00:13:07.951 LIB libspdk_bdev_raid.a 00:13:08.210 SO libspdk_bdev_raid.so.6.0 00:13:08.210 LIB libspdk_bdev_virtio.a 00:13:08.210 SO libspdk_bdev_virtio.so.6.0 00:13:08.210 SYMLINK libspdk_bdev_raid.so 00:13:08.210 SYMLINK libspdk_bdev_virtio.so 00:13:08.468 LIB libspdk_bdev_nvme.a 00:13:08.740 SO libspdk_bdev_nvme.so.7.0 00:13:08.740 SYMLINK libspdk_bdev_nvme.so 00:13:09.318 CC module/event/subsystems/scheduler/scheduler.o 00:13:09.318 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:09.318 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:09.318 CC module/event/subsystems/iobuf/iobuf.o 00:13:09.318 CC module/event/subsystems/vmd/vmd.o 00:13:09.318 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:09.318 CC module/event/subsystems/sock/sock.o 00:13:09.318 CC module/event/subsystems/keyring/keyring.o 00:13:09.589 LIB libspdk_event_scheduler.a 00:13:09.590 LIB libspdk_event_sock.a 00:13:09.590 LIB libspdk_event_iobuf.a 00:13:09.590 LIB libspdk_event_vhost_blk.a 00:13:09.590 LIB libspdk_event_vmd.a 00:13:09.590 SO libspdk_event_scheduler.so.4.0 00:13:09.590 LIB libspdk_event_keyring.a 00:13:09.590 SO libspdk_event_sock.so.5.0 00:13:09.590 SO libspdk_event_vhost_blk.so.3.0 00:13:09.590 SO libspdk_event_vmd.so.6.0 00:13:09.590 SO libspdk_event_iobuf.so.3.0 00:13:09.590 SO libspdk_event_keyring.so.1.0 00:13:09.590 SYMLINK libspdk_event_scheduler.so 00:13:09.590 SYMLINK libspdk_event_sock.so 00:13:09.590 SYMLINK 
libspdk_event_vhost_blk.so 00:13:09.590 SYMLINK libspdk_event_iobuf.so 00:13:09.590 SYMLINK libspdk_event_vmd.so 00:13:09.590 SYMLINK libspdk_event_keyring.so 00:13:10.165 CC module/event/subsystems/accel/accel.o 00:13:10.165 LIB libspdk_event_accel.a 00:13:10.165 SO libspdk_event_accel.so.6.0 00:13:10.424 SYMLINK libspdk_event_accel.so 00:13:10.683 CC module/event/subsystems/bdev/bdev.o 00:13:10.942 LIB libspdk_event_bdev.a 00:13:10.942 SO libspdk_event_bdev.so.6.0 00:13:10.942 SYMLINK libspdk_event_bdev.so 00:13:11.536 CC module/event/subsystems/scsi/scsi.o 00:13:11.536 CC module/event/subsystems/nbd/nbd.o 00:13:11.536 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:11.536 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:11.536 CC module/event/subsystems/ublk/ublk.o 00:13:11.536 LIB libspdk_event_nbd.a 00:13:11.536 LIB libspdk_event_scsi.a 00:13:11.536 SO libspdk_event_nbd.so.6.0 00:13:11.536 LIB libspdk_event_ublk.a 00:13:11.536 SO libspdk_event_scsi.so.6.0 00:13:11.536 SO libspdk_event_ublk.so.3.0 00:13:11.536 SYMLINK libspdk_event_nbd.so 00:13:11.536 SYMLINK libspdk_event_scsi.so 00:13:11.536 LIB libspdk_event_nvmf.a 00:13:11.536 SYMLINK libspdk_event_ublk.so 00:13:11.809 SO libspdk_event_nvmf.so.6.0 00:13:11.809 SYMLINK libspdk_event_nvmf.so 00:13:12.068 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:12.068 CC module/event/subsystems/iscsi/iscsi.o 00:13:12.068 LIB libspdk_event_vhost_scsi.a 00:13:12.068 LIB libspdk_event_iscsi.a 00:13:12.068 SO libspdk_event_vhost_scsi.so.3.0 00:13:12.327 SO libspdk_event_iscsi.so.6.0 00:13:12.327 SYMLINK libspdk_event_vhost_scsi.so 00:13:12.327 SYMLINK libspdk_event_iscsi.so 00:13:12.586 SO libspdk.so.6.0 00:13:12.586 SYMLINK libspdk.so 00:13:12.845 TEST_HEADER include/spdk/accel.h 00:13:12.845 TEST_HEADER include/spdk/accel_module.h 00:13:12.845 TEST_HEADER include/spdk/assert.h 00:13:12.845 TEST_HEADER include/spdk/barrier.h 00:13:12.845 TEST_HEADER include/spdk/base64.h 00:13:12.845 TEST_HEADER include/spdk/bdev.h 00:13:12.845 TEST_HEADER include/spdk/bdev_module.h 00:13:12.845 TEST_HEADER include/spdk/bdev_zone.h 00:13:12.845 TEST_HEADER include/spdk/bit_array.h 00:13:12.845 TEST_HEADER include/spdk/bit_pool.h 00:13:12.845 TEST_HEADER include/spdk/blob_bdev.h 00:13:12.845 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:12.845 CXX app/trace/trace.o 00:13:12.845 TEST_HEADER include/spdk/blobfs.h 00:13:12.845 TEST_HEADER include/spdk/blob.h 00:13:12.845 TEST_HEADER include/spdk/conf.h 00:13:12.845 TEST_HEADER include/spdk/config.h 00:13:12.845 TEST_HEADER include/spdk/cpuset.h 00:13:12.845 TEST_HEADER include/spdk/crc16.h 00:13:12.845 TEST_HEADER include/spdk/crc32.h 00:13:12.845 TEST_HEADER include/spdk/crc64.h 00:13:12.845 TEST_HEADER include/spdk/dif.h 00:13:12.845 TEST_HEADER include/spdk/dma.h 00:13:12.845 TEST_HEADER include/spdk/endian.h 00:13:12.845 TEST_HEADER include/spdk/env_dpdk.h 00:13:12.845 TEST_HEADER include/spdk/env.h 00:13:12.845 TEST_HEADER include/spdk/event.h 00:13:12.845 TEST_HEADER include/spdk/fd_group.h 00:13:12.845 TEST_HEADER include/spdk/fd.h 00:13:12.845 TEST_HEADER include/spdk/file.h 00:13:12.845 TEST_HEADER include/spdk/ftl.h 00:13:12.845 TEST_HEADER include/spdk/gpt_spec.h 00:13:12.845 TEST_HEADER include/spdk/hexlify.h 00:13:12.845 TEST_HEADER include/spdk/histogram_data.h 00:13:12.845 TEST_HEADER include/spdk/idxd.h 00:13:12.845 TEST_HEADER include/spdk/idxd_spec.h 00:13:12.845 TEST_HEADER include/spdk/init.h 00:13:12.845 TEST_HEADER include/spdk/ioat.h 00:13:12.845 TEST_HEADER include/spdk/ioat_spec.h 
00:13:12.845 TEST_HEADER include/spdk/iscsi_spec.h 00:13:12.845 TEST_HEADER include/spdk/json.h 00:13:12.845 TEST_HEADER include/spdk/jsonrpc.h 00:13:12.845 TEST_HEADER include/spdk/keyring.h 00:13:12.845 TEST_HEADER include/spdk/keyring_module.h 00:13:12.845 TEST_HEADER include/spdk/likely.h 00:13:12.845 TEST_HEADER include/spdk/log.h 00:13:12.845 TEST_HEADER include/spdk/lvol.h 00:13:12.845 CC test/event/event_perf/event_perf.o 00:13:12.845 TEST_HEADER include/spdk/memory.h 00:13:12.845 TEST_HEADER include/spdk/mmio.h 00:13:12.845 TEST_HEADER include/spdk/nbd.h 00:13:12.845 TEST_HEADER include/spdk/notify.h 00:13:12.845 TEST_HEADER include/spdk/nvme.h 00:13:12.845 CC examples/accel/perf/accel_perf.o 00:13:12.845 CC test/dma/test_dma/test_dma.o 00:13:12.845 TEST_HEADER include/spdk/nvme_intel.h 00:13:12.845 CC test/accel/dif/dif.o 00:13:12.845 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:12.845 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:12.845 TEST_HEADER include/spdk/nvme_spec.h 00:13:12.845 TEST_HEADER include/spdk/nvme_zns.h 00:13:12.845 CC test/bdev/bdevio/bdevio.o 00:13:12.845 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:12.845 CC test/blobfs/mkfs/mkfs.o 00:13:12.845 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:12.845 TEST_HEADER include/spdk/nvmf.h 00:13:12.845 TEST_HEADER include/spdk/nvmf_spec.h 00:13:13.103 TEST_HEADER include/spdk/nvmf_transport.h 00:13:13.103 TEST_HEADER include/spdk/opal.h 00:13:13.103 TEST_HEADER include/spdk/opal_spec.h 00:13:13.103 TEST_HEADER include/spdk/pci_ids.h 00:13:13.103 TEST_HEADER include/spdk/pipe.h 00:13:13.103 TEST_HEADER include/spdk/queue.h 00:13:13.103 TEST_HEADER include/spdk/reduce.h 00:13:13.103 TEST_HEADER include/spdk/rpc.h 00:13:13.103 TEST_HEADER include/spdk/scheduler.h 00:13:13.103 TEST_HEADER include/spdk/scsi.h 00:13:13.103 TEST_HEADER include/spdk/scsi_spec.h 00:13:13.103 CC test/app/bdev_svc/bdev_svc.o 00:13:13.103 TEST_HEADER include/spdk/sock.h 00:13:13.103 TEST_HEADER include/spdk/stdinc.h 00:13:13.103 TEST_HEADER include/spdk/string.h 00:13:13.103 TEST_HEADER include/spdk/thread.h 00:13:13.103 TEST_HEADER include/spdk/trace.h 00:13:13.103 TEST_HEADER include/spdk/trace_parser.h 00:13:13.103 TEST_HEADER include/spdk/tree.h 00:13:13.103 TEST_HEADER include/spdk/ublk.h 00:13:13.103 TEST_HEADER include/spdk/util.h 00:13:13.103 TEST_HEADER include/spdk/uuid.h 00:13:13.103 TEST_HEADER include/spdk/version.h 00:13:13.103 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:13.103 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:13.103 TEST_HEADER include/spdk/vhost.h 00:13:13.103 TEST_HEADER include/spdk/vmd.h 00:13:13.103 TEST_HEADER include/spdk/xor.h 00:13:13.103 TEST_HEADER include/spdk/zipf.h 00:13:13.103 CC test/env/mem_callbacks/mem_callbacks.o 00:13:13.103 CXX test/cpp_headers/accel.o 00:13:13.103 LINK event_perf 00:13:13.103 LINK bdev_svc 00:13:13.103 LINK mkfs 00:13:13.365 CXX test/cpp_headers/accel_module.o 00:13:13.365 LINK spdk_trace 00:13:13.365 LINK test_dma 00:13:13.365 LINK bdevio 00:13:13.365 LINK dif 00:13:13.365 LINK accel_perf 00:13:13.365 CC test/event/reactor/reactor.o 00:13:13.365 CXX test/cpp_headers/assert.o 00:13:13.365 CXX test/cpp_headers/barrier.o 00:13:13.624 LINK reactor 00:13:13.624 CXX test/cpp_headers/base64.o 00:13:13.624 CC app/trace_record/trace_record.o 00:13:13.624 LINK mem_callbacks 00:13:13.624 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:13.624 CC test/event/reactor_perf/reactor_perf.o 00:13:13.624 CC test/event/app_repeat/app_repeat.o 00:13:13.624 CC test/event/scheduler/scheduler.o 
00:13:13.882 CXX test/cpp_headers/bdev.o 00:13:13.882 CC examples/bdev/hello_world/hello_bdev.o 00:13:13.882 LINK reactor_perf 00:13:13.882 LINK spdk_trace_record 00:13:13.882 CC test/env/vtophys/vtophys.o 00:13:13.882 LINK app_repeat 00:13:13.882 CC test/lvol/esnap/esnap.o 00:13:13.882 CC test/nvme/aer/aer.o 00:13:13.882 CXX test/cpp_headers/bdev_module.o 00:13:13.882 LINK scheduler 00:13:14.141 LINK vtophys 00:13:14.141 LINK hello_bdev 00:13:14.141 LINK nvme_fuzz 00:13:14.141 CC test/rpc_client/rpc_client_test.o 00:13:14.141 CC app/nvmf_tgt/nvmf_main.o 00:13:14.141 CXX test/cpp_headers/bdev_zone.o 00:13:14.141 LINK aer 00:13:14.399 LINK rpc_client_test 00:13:14.399 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:14.399 CC examples/blob/hello_world/hello_blob.o 00:13:14.399 CC test/env/memory/memory_ut.o 00:13:14.399 LINK nvmf_tgt 00:13:14.399 CXX test/cpp_headers/bit_array.o 00:13:14.399 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:14.399 CC examples/bdev/bdevperf/bdevperf.o 00:13:14.399 CXX test/cpp_headers/bit_pool.o 00:13:14.399 LINK env_dpdk_post_init 00:13:14.399 CC test/nvme/reset/reset.o 00:13:14.400 LINK hello_blob 00:13:14.659 CXX test/cpp_headers/blob_bdev.o 00:13:14.659 CC test/thread/poller_perf/poller_perf.o 00:13:14.659 CC app/iscsi_tgt/iscsi_tgt.o 00:13:14.659 CC test/nvme/sgl/sgl.o 00:13:14.659 LINK reset 00:13:14.995 CXX test/cpp_headers/blobfs_bdev.o 00:13:14.996 LINK poller_perf 00:13:14.996 LINK iscsi_tgt 00:13:14.996 CC examples/blob/cli/blobcli.o 00:13:14.996 LINK sgl 00:13:14.996 CXX test/cpp_headers/blobfs.o 00:13:14.996 CC test/app/histogram_perf/histogram_perf.o 00:13:15.262 LINK memory_ut 00:13:15.262 CC test/env/pci/pci_ut.o 00:13:15.262 LINK bdevperf 00:13:15.262 CXX test/cpp_headers/blob.o 00:13:15.262 LINK histogram_perf 00:13:15.262 CC test/nvme/e2edp/nvme_dp.o 00:13:15.262 CC app/spdk_tgt/spdk_tgt.o 00:13:15.262 LINK blobcli 00:13:15.262 CXX test/cpp_headers/conf.o 00:13:15.262 CXX test/cpp_headers/config.o 00:13:15.520 CC test/app/jsoncat/jsoncat.o 00:13:15.520 CC test/nvme/overhead/overhead.o 00:13:15.520 LINK pci_ut 00:13:15.520 LINK spdk_tgt 00:13:15.520 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:15.520 CXX test/cpp_headers/cpuset.o 00:13:15.520 LINK nvme_dp 00:13:15.520 LINK jsoncat 00:13:15.520 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:15.778 CXX test/cpp_headers/crc16.o 00:13:15.778 LINK overhead 00:13:15.778 CC examples/ioat/perf/perf.o 00:13:15.778 CC app/spdk_lspci/spdk_lspci.o 00:13:15.778 CXX test/cpp_headers/crc32.o 00:13:15.778 LINK iscsi_fuzz 00:13:15.778 CC examples/nvme/hello_world/hello_world.o 00:13:15.778 CC test/nvme/err_injection/err_injection.o 00:13:16.036 CC examples/sock/hello_world/hello_sock.o 00:13:16.036 LINK ioat_perf 00:13:16.036 LINK spdk_lspci 00:13:16.036 CXX test/cpp_headers/crc64.o 00:13:16.036 LINK vhost_fuzz 00:13:16.036 CC test/nvme/startup/startup.o 00:13:16.036 LINK err_injection 00:13:16.036 LINK hello_world 00:13:16.036 LINK hello_sock 00:13:16.036 CXX test/cpp_headers/dif.o 00:13:16.293 LINK startup 00:13:16.293 CC examples/ioat/verify/verify.o 00:13:16.293 CC test/nvme/reserve/reserve.o 00:13:16.293 CC app/spdk_nvme_perf/perf.o 00:13:16.293 CC test/app/stub/stub.o 00:13:16.293 CXX test/cpp_headers/dma.o 00:13:16.293 CC examples/nvme/reconnect/reconnect.o 00:13:16.551 LINK verify 00:13:16.551 LINK reserve 00:13:16.551 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:16.551 CC examples/vmd/lsvmd/lsvmd.o 00:13:16.551 CC examples/nvme/arbitration/arbitration.o 00:13:16.551 LINK stub 
00:13:16.551 CXX test/cpp_headers/endian.o 00:13:16.551 LINK lsvmd 00:13:16.551 CXX test/cpp_headers/env_dpdk.o 00:13:16.809 LINK reconnect 00:13:16.809 CC examples/nvme/hotplug/hotplug.o 00:13:16.809 CC test/nvme/simple_copy/simple_copy.o 00:13:16.809 LINK arbitration 00:13:16.809 CC test/nvme/connect_stress/connect_stress.o 00:13:16.809 CXX test/cpp_headers/env.o 00:13:16.809 LINK nvme_manage 00:13:17.067 CC examples/vmd/led/led.o 00:13:17.067 LINK hotplug 00:13:17.068 LINK spdk_nvme_perf 00:13:17.068 LINK simple_copy 00:13:17.068 CXX test/cpp_headers/event.o 00:13:17.068 LINK connect_stress 00:13:17.068 CC test/nvme/boot_partition/boot_partition.o 00:13:17.068 LINK led 00:13:17.068 CXX test/cpp_headers/fd_group.o 00:13:17.326 CXX test/cpp_headers/fd.o 00:13:17.326 CXX test/cpp_headers/file.o 00:13:17.326 LINK boot_partition 00:13:17.326 CC test/nvme/compliance/nvme_compliance.o 00:13:17.326 CXX test/cpp_headers/ftl.o 00:13:17.326 CC app/spdk_nvme_identify/identify.o 00:13:17.326 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:17.326 CC app/spdk_nvme_discover/discovery_aer.o 00:13:17.585 CC app/spdk_top/spdk_top.o 00:13:17.585 CC app/vhost/vhost.o 00:13:17.585 CXX test/cpp_headers/gpt_spec.o 00:13:17.585 LINK cmb_copy 00:13:17.585 CC app/spdk_dd/spdk_dd.o 00:13:17.585 LINK nvme_compliance 00:13:17.585 LINK spdk_nvme_discover 00:13:17.843 LINK vhost 00:13:17.843 CXX test/cpp_headers/hexlify.o 00:13:17.843 CC app/fio/nvme/fio_plugin.o 00:13:17.843 CC examples/nvme/abort/abort.o 00:13:17.843 CXX test/cpp_headers/histogram_data.o 00:13:18.101 LINK spdk_dd 00:13:18.101 CC test/nvme/fused_ordering/fused_ordering.o 00:13:18.101 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:18.101 CXX test/cpp_headers/idxd.o 00:13:18.101 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:18.101 LINK esnap 00:13:18.101 LINK spdk_nvme_identify 00:13:18.101 LINK fused_ordering 00:13:18.101 LINK pmr_persistence 00:13:18.369 CXX test/cpp_headers/idxd_spec.o 00:13:18.369 LINK spdk_top 00:13:18.369 LINK spdk_nvme 00:13:18.369 LINK abort 00:13:18.369 LINK doorbell_aers 00:13:18.369 CC test/nvme/fdp/fdp.o 00:13:18.370 CXX test/cpp_headers/init.o 00:13:18.370 CC app/fio/bdev/fio_plugin.o 00:13:18.642 CXX test/cpp_headers/ioat.o 00:13:18.642 CC test/nvme/cuse/cuse.o 00:13:18.642 CC examples/util/zipf/zipf.o 00:13:18.642 CC examples/nvmf/nvmf/nvmf.o 00:13:18.642 LINK fdp 00:13:18.642 CC examples/idxd/perf/perf.o 00:13:18.642 CC examples/thread/thread/thread_ex.o 00:13:18.642 LINK zipf 00:13:18.642 CXX test/cpp_headers/ioat_spec.o 00:13:18.901 CXX test/cpp_headers/iscsi_spec.o 00:13:18.901 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:18.901 LINK thread 00:13:18.901 LINK nvmf 00:13:18.901 CXX test/cpp_headers/json.o 00:13:18.901 CXX test/cpp_headers/jsonrpc.o 00:13:18.901 LINK spdk_bdev 00:13:18.901 LINK idxd_perf 00:13:18.901 CXX test/cpp_headers/keyring.o 00:13:19.160 LINK interrupt_tgt 00:13:19.160 CXX test/cpp_headers/keyring_module.o 00:13:19.160 CXX test/cpp_headers/likely.o 00:13:19.160 CXX test/cpp_headers/log.o 00:13:19.160 CXX test/cpp_headers/lvol.o 00:13:19.160 CXX test/cpp_headers/memory.o 00:13:19.160 CXX test/cpp_headers/mmio.o 00:13:19.160 CXX test/cpp_headers/nbd.o 00:13:19.160 CXX test/cpp_headers/notify.o 00:13:19.160 CXX test/cpp_headers/nvme.o 00:13:19.160 CXX test/cpp_headers/nvme_intel.o 00:13:19.160 CXX test/cpp_headers/nvme_ocssd.o 00:13:19.160 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:19.418 CXX test/cpp_headers/nvme_spec.o 00:13:19.418 CXX test/cpp_headers/nvme_zns.o 00:13:19.418 CXX 
test/cpp_headers/nvmf_cmd.o 00:13:19.418 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:19.418 CXX test/cpp_headers/nvmf.o 00:13:19.418 CXX test/cpp_headers/nvmf_spec.o 00:13:19.418 CXX test/cpp_headers/nvmf_transport.o 00:13:19.418 CXX test/cpp_headers/opal.o 00:13:19.418 CXX test/cpp_headers/opal_spec.o 00:13:19.418 CXX test/cpp_headers/pipe.o 00:13:19.418 CXX test/cpp_headers/pci_ids.o 00:13:19.677 LINK cuse 00:13:19.677 CXX test/cpp_headers/queue.o 00:13:19.677 CXX test/cpp_headers/reduce.o 00:13:19.677 CXX test/cpp_headers/rpc.o 00:13:19.677 CXX test/cpp_headers/scheduler.o 00:13:19.677 CXX test/cpp_headers/scsi.o 00:13:19.677 CXX test/cpp_headers/scsi_spec.o 00:13:19.677 CXX test/cpp_headers/sock.o 00:13:19.677 CXX test/cpp_headers/stdinc.o 00:13:19.677 CXX test/cpp_headers/string.o 00:13:19.935 CXX test/cpp_headers/thread.o 00:13:19.935 CXX test/cpp_headers/trace.o 00:13:19.935 CXX test/cpp_headers/trace_parser.o 00:13:19.935 CXX test/cpp_headers/tree.o 00:13:19.935 CXX test/cpp_headers/ublk.o 00:13:19.935 CXX test/cpp_headers/util.o 00:13:19.935 CXX test/cpp_headers/uuid.o 00:13:19.935 CXX test/cpp_headers/version.o 00:13:19.935 CXX test/cpp_headers/vfio_user_pci.o 00:13:19.935 CXX test/cpp_headers/vfio_user_spec.o 00:13:19.935 CXX test/cpp_headers/vhost.o 00:13:19.935 CXX test/cpp_headers/vmd.o 00:13:19.935 CXX test/cpp_headers/xor.o 00:13:19.935 CXX test/cpp_headers/zipf.o 00:13:25.223 00:13:25.223 real 1m10.233s 00:13:25.223 user 6m6.950s 00:13:25.223 sys 1m59.272s 00:13:25.223 15:00:40 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:13:25.223 15:00:40 -- common/autotest_common.sh@10 -- $ set +x 00:13:25.223 ************************************ 00:13:25.223 END TEST make 00:13:25.223 ************************************ 00:13:25.223 15:00:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:25.223 15:00:40 -- pm/common@30 -- $ signal_monitor_resources TERM 00:13:25.223 15:00:40 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:13:25.223 15:00:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:25.223 15:00:40 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:25.223 15:00:40 -- pm/common@45 -- $ pid=5146 00:13:25.223 15:00:40 -- pm/common@52 -- $ sudo kill -TERM 5146 00:13:25.223 15:00:40 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:25.223 15:00:40 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:25.223 15:00:40 -- pm/common@45 -- $ pid=5147 00:13:25.223 15:00:40 -- pm/common@52 -- $ sudo kill -TERM 5147 00:13:25.223 15:00:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.535 15:00:40 -- nvmf/common.sh@7 -- # uname -s 00:13:25.535 15:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.535 15:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.535 15:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.535 15:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.535 15:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.535 15:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.535 15:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.535 15:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.535 15:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.535 15:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.535 15:00:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:13:25.535 15:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:13:25.535 15:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.535 15:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.535 15:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.535 15:00:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.535 15:00:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.535 15:00:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.535 15:00:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.535 15:00:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.536 15:00:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.536 15:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.536 15:00:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.536 15:00:40 -- paths/export.sh@5 -- # export PATH 00:13:25.536 15:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.536 15:00:40 -- nvmf/common.sh@47 -- # : 0 00:13:25.536 15:00:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.536 15:00:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.536 15:00:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.536 15:00:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.536 15:00:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.536 15:00:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.536 15:00:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.536 15:00:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.536 15:00:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:25.536 15:00:40 -- spdk/autotest.sh@32 -- # uname -s 00:13:25.536 15:00:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:25.536 15:00:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:25.536 15:00:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:25.536 15:00:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:25.536 15:00:40 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:25.536 15:00:40 -- spdk/autotest.sh@44 -- # modprobe nbd 
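
At this point autotest.sh has sourced test/nvmf/common.sh, which exports the values the functional tests use later in the run to reach the NVMe-oF target: the listener ports 4420-4422, the 127.0.0.1 TCP address used because NET_TYPE=virt, a host NQN and host ID freshly generated with nvme gen-hostnqn, and the test subsystem NQN nqn.2016-06.io.spdk:testnqn. Purely as an illustration of how those variables compose (the actual connect calls are made inside the individual test scripts, not at this point in the log), a connect built from them would look roughly like:

    # Illustration only: composing the exported NVMF_*/NVME_* values into an
    # nvme-cli connect call. The test scripts assemble this from $NVME_CONNECT and
    # the NVME_HOST array; nothing is connected at this point in the log.
    #   NVMF_TCP_IP_ADDRESS=127.0.0.1   NVMF_PORT=4420
    #   NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
    #   NVME_HOST=(--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID)
    nvme connect -t tcp -a "$NVMF_TCP_IP_ADDRESS" -s "$NVMF_PORT" \
        -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
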
00:13:25.536 15:00:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:25.536 15:00:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:25.536 15:00:41 -- spdk/autotest.sh@48 -- # udevadm_pid=53934 00:13:25.536 15:00:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:25.536 15:00:41 -- pm/common@17 -- # local monitor 00:13:25.536 15:00:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:25.536 15:00:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:25.536 15:00:41 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53936 00:13:25.536 15:00:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:25.536 15:00:41 -- pm/common@21 -- # date +%s 00:13:25.536 15:00:41 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53938 00:13:25.536 15:00:41 -- pm/common@26 -- # sleep 1 00:13:25.536 15:00:41 -- pm/common@21 -- # date +%s 00:13:25.536 15:00:41 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713452441 00:13:25.536 15:00:41 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713452441 00:13:25.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713452441_collect-vmstat.pm.log 00:13:25.536 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713452441_collect-cpu-load.pm.log 00:13:26.471 15:00:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:26.471 15:00:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:26.471 15:00:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:26.471 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:13:26.471 15:00:42 -- spdk/autotest.sh@59 -- # create_test_list 00:13:26.471 15:00:42 -- common/autotest_common.sh@734 -- # xtrace_disable 00:13:26.471 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:13:26.471 15:00:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:26.471 15:00:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:26.471 15:00:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:26.471 15:00:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:26.471 15:00:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:26.471 15:00:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:26.471 15:00:42 -- common/autotest_common.sh@1441 -- # uname 00:13:26.471 15:00:42 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:13:26.471 15:00:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:26.471 15:00:42 -- common/autotest_common.sh@1461 -- # uname 00:13:26.471 15:00:42 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:13:26.471 15:00:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:13:26.471 15:00:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:13:26.471 15:00:42 -- spdk/autotest.sh@72 -- # hash lcov 00:13:26.472 15:00:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:26.472 15:00:42 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:13:26.472 --rc lcov_branch_coverage=1 00:13:26.472 --rc lcov_function_coverage=1 00:13:26.472 --rc genhtml_branch_coverage=1 00:13:26.472 --rc genhtml_function_coverage=1 00:13:26.472 --rc 
genhtml_legend=1 00:13:26.472 --rc geninfo_all_blocks=1 00:13:26.472 ' 00:13:26.472 15:00:42 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:13:26.472 --rc lcov_branch_coverage=1 00:13:26.472 --rc lcov_function_coverage=1 00:13:26.472 --rc genhtml_branch_coverage=1 00:13:26.472 --rc genhtml_function_coverage=1 00:13:26.472 --rc genhtml_legend=1 00:13:26.472 --rc geninfo_all_blocks=1 00:13:26.472 ' 00:13:26.472 15:00:42 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:13:26.472 --rc lcov_branch_coverage=1 00:13:26.472 --rc lcov_function_coverage=1 00:13:26.472 --rc genhtml_branch_coverage=1 00:13:26.472 --rc genhtml_function_coverage=1 00:13:26.472 --rc genhtml_legend=1 00:13:26.472 --rc geninfo_all_blocks=1 00:13:26.472 --no-external' 00:13:26.472 15:00:42 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:13:26.472 --rc lcov_branch_coverage=1 00:13:26.472 --rc lcov_function_coverage=1 00:13:26.472 --rc genhtml_branch_coverage=1 00:13:26.472 --rc genhtml_function_coverage=1 00:13:26.472 --rc genhtml_legend=1 00:13:26.472 --rc geninfo_all_blocks=1 00:13:26.472 --no-external' 00:13:26.472 15:00:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:13:26.731 lcov: LCOV version 1.14 00:13:26.731 15:00:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:34.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:13:34.850 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:13:34.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:13:34.850 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:13:34.850 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:13:34.850 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:13:41.409 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:41.409 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions 
found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 
00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:13:53.621 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:13:53.621 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:13:53.622 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:13:53.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:13:53.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:13:53.623 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:13:53.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:13:53.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:13:53.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:13:53.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:13:53.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:13:53.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:13:53.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:13:53.883 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:13:53.883 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:13:53.883 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:13:57.175 15:01:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:13:57.175 15:01:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:57.175 15:01:12 -- common/autotest_common.sh@10 -- # set +x 00:13:57.175 15:01:12 -- spdk/autotest.sh@91 -- # rm -f 00:13:57.175 15:01:12 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:58.108 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.108 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:13:58.108 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:13:58.108 15:01:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:13:58.108 15:01:13 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:58.108 15:01:13 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:58.108 15:01:13 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:58.108 15:01:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:58.108 15:01:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:58.108 15:01:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:58.108 15:01:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:58.108 15:01:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:58.108 15:01:13 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:58.108 15:01:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:58.108 15:01:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:13:58.108 15:01:13 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:13:58.108 15:01:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:58.108 15:01:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:13:58.108 15:01:13 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:13:58.108 15:01:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:13:58.108 15:01:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:58.108 15:01:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:13:58.108 15:01:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.108 15:01:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.108 15:01:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:13:58.108 15:01:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:13:58.108 15:01:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:58.108 No valid GPT data, bailing 00:13:58.108 15:01:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:58.108 15:01:13 -- scripts/common.sh@391 -- # pt= 00:13:58.108 
15:01:13 -- scripts/common.sh@392 -- # return 1 00:13:58.109 15:01:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:13:58.109 1+0 records in 00:13:58.109 1+0 records out 00:13:58.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593591 s, 177 MB/s 00:13:58.109 15:01:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.109 15:01:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.109 15:01:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:13:58.109 15:01:13 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:13:58.109 15:01:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:13:58.109 No valid GPT data, bailing 00:13:58.109 15:01:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:13:58.109 15:01:13 -- scripts/common.sh@391 -- # pt= 00:13:58.109 15:01:13 -- scripts/common.sh@392 -- # return 1 00:13:58.109 15:01:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:13:58.109 1+0 records in 00:13:58.109 1+0 records out 00:13:58.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643604 s, 163 MB/s 00:13:58.109 15:01:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.109 15:01:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.109 15:01:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:13:58.109 15:01:13 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:13:58.109 15:01:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:13:58.367 No valid GPT data, bailing 00:13:58.367 15:01:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:13:58.367 15:01:13 -- scripts/common.sh@391 -- # pt= 00:13:58.367 15:01:13 -- scripts/common.sh@392 -- # return 1 00:13:58.367 15:01:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:13:58.367 1+0 records in 00:13:58.367 1+0 records out 00:13:58.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475407 s, 221 MB/s 00:13:58.367 15:01:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:13:58.367 15:01:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:13:58.367 15:01:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:13:58.367 15:01:13 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:13:58.367 15:01:13 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:13:58.367 No valid GPT data, bailing 00:13:58.368 15:01:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:13:58.368 15:01:13 -- scripts/common.sh@391 -- # pt= 00:13:58.368 15:01:13 -- scripts/common.sh@392 -- # return 1 00:13:58.368 15:01:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:13:58.368 1+0 records in 00:13:58.368 1+0 records out 00:13:58.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0036227 s, 289 MB/s 00:13:58.368 15:01:13 -- spdk/autotest.sh@118 -- # sync 00:13:58.368 15:01:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:13:58.368 15:01:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:13:58.368 15:01:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:01.652 15:01:16 -- spdk/autotest.sh@124 -- # uname -s 00:14:01.652 15:01:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:01.652 15:01:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:01.652 
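The sequence above is autotest.sh's pre-cleanup pass over the NVMe namespaces: each non-zoned device is probed with spdk-gpt.py and blkid, and when no partition table turns up ("No valid GPT data, bailing", empty PTTYPE) the first MiB is zeroed so stale metadata cannot interfere with later tests. A condensed sketch of that idea, assuming plain /dev/nvme*n* naming and leaving out the extglob partition filter and error handling of the real scripts:

#!/usr/bin/env bash
# Wipe the first MiB of every idle, non-zoned NVMe namespace (destructive; run as root).
set -euo pipefail

for dev in /dev/nvme*n*; do
    name=$(basename "$dev")
    [[ $name == *p* ]] && continue   # skip partitions such as nvme0n1p1

    # Zoned namespaces report something other than "none" here; leave those untouched.
    zoned=/sys/block/$name/queue/zoned
    [[ -e $zoned && $(<"$zoned") != none ]] && continue

    # An empty PTTYPE means blkid found no partition table, i.e. the device looks unused.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
sync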
15:01:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:01.652 15:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.652 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:01.652 ************************************ 00:14:01.652 START TEST setup.sh 00:14:01.653 ************************************ 00:14:01.653 15:01:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:01.653 * Looking for test storage... 00:14:01.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:01.653 15:01:16 -- setup/test-setup.sh@10 -- # uname -s 00:14:01.653 15:01:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:01.653 15:01:16 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:01.653 15:01:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:01.653 15:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.653 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:14:01.653 ************************************ 00:14:01.653 START TEST acl 00:14:01.653 ************************************ 00:14:01.653 15:01:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:01.653 * Looking for test storage... 00:14:01.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:01.653 15:01:17 -- setup/acl.sh@10 -- # get_zoned_devs 00:14:01.653 15:01:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:01.653 15:01:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:01.653 15:01:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:01.653 15:01:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:01.653 15:01:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:01.653 15:01:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:01.653 15:01:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:01.653 15:01:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:01.653 15:01:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:01.653 15:01:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:01.653 15:01:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:01.653 15:01:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:01.653 15:01:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:01.653 15:01:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:01.653 15:01:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:01.653 15:01:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:01.653 15:01:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:01.653 15:01:17 -- setup/acl.sh@12 -- # devs=() 00:14:01.653 15:01:17 -- setup/acl.sh@12 -- # declare -a 
devs 00:14:01.653 15:01:17 -- setup/acl.sh@13 -- # drivers=() 00:14:01.653 15:01:17 -- setup/acl.sh@13 -- # declare -A drivers 00:14:01.653 15:01:17 -- setup/acl.sh@51 -- # setup reset 00:14:01.653 15:01:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:01.653 15:01:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:02.589 15:01:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:14:02.589 15:01:18 -- setup/acl.sh@16 -- # local dev driver 00:14:02.589 15:01:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:02.589 15:01:18 -- setup/acl.sh@15 -- # setup output status 00:14:02.589 15:01:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:02.589 15:01:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # continue 00:14:03.525 15:01:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.525 Hugepages 00:14:03.525 node hugesize free / total 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # continue 00:14:03.525 15:01:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.525 00:14:03.525 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:03.525 15:01:18 -- setup/acl.sh@19 -- # continue 00:14:03.525 15:01:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.525 15:01:19 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:03.525 15:01:19 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:03.525 15:01:19 -- setup/acl.sh@20 -- # continue 00:14:03.525 15:01:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.525 15:01:19 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:03.525 15:01:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:03.525 15:01:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:03.525 15:01:19 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:03.525 15:01:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:03.525 15:01:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.785 15:01:19 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:03.785 15:01:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:03.785 15:01:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:03.785 15:01:19 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:03.785 15:01:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:03.785 15:01:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:03.785 15:01:19 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:14:03.785 15:01:19 -- setup/acl.sh@54 -- # run_test denied denied 00:14:03.785 15:01:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:03.785 15:01:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.785 15:01:19 -- common/autotest_common.sh@10 -- # set +x 00:14:03.785 ************************************ 00:14:03.785 START TEST denied 00:14:03.785 ************************************ 00:14:03.785 15:01:19 -- common/autotest_common.sh@1111 -- # denied 00:14:03.785 15:01:19 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:03.785 15:01:19 -- setup/acl.sh@38 -- # setup output config 00:14:03.785 15:01:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:03.785 15:01:19 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:03.785 15:01:19 
-- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:04.754 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:04.754 15:01:20 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:04.754 15:01:20 -- setup/acl.sh@28 -- # local dev driver 00:14:04.754 15:01:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:04.754 15:01:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:04.754 15:01:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:04.754 15:01:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:04.755 15:01:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:04.755 15:01:20 -- setup/acl.sh@41 -- # setup reset 00:14:04.755 15:01:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:04.755 15:01:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:05.692 00:14:05.692 real 0m1.879s 00:14:05.692 user 0m0.667s 00:14:05.692 sys 0m1.177s 00:14:05.692 15:01:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:05.692 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:05.692 ************************************ 00:14:05.692 END TEST denied 00:14:05.692 ************************************ 00:14:05.692 15:01:21 -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:05.692 15:01:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:05.692 15:01:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.692 15:01:21 -- common/autotest_common.sh@10 -- # set +x 00:14:05.692 ************************************ 00:14:05.692 START TEST allowed 00:14:05.692 ************************************ 00:14:05.692 15:01:21 -- common/autotest_common.sh@1111 -- # allowed 00:14:05.692 15:01:21 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:05.692 15:01:21 -- setup/acl.sh@45 -- # setup output config 00:14:05.692 15:01:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:05.692 15:01:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:05.692 15:01:21 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:06.628 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:06.628 15:01:22 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:14:06.628 15:01:22 -- setup/acl.sh@28 -- # local dev driver 00:14:06.628 15:01:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:06.628 15:01:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:06.628 15:01:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:06.628 15:01:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:06.628 15:01:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:06.628 15:01:22 -- setup/acl.sh@48 -- # setup reset 00:14:06.628 15:01:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:06.628 15:01:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:07.566 00:14:07.566 real 0m1.825s 00:14:07.566 user 0m0.741s 00:14:07.566 sys 0m1.084s 00:14:07.566 15:01:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:07.566 ************************************ 00:14:07.566 END TEST allowed 00:14:07.566 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.566 ************************************ 00:14:07.566 00:14:07.566 real 0m6.217s 00:14:07.566 user 0m2.423s 00:14:07.566 sys 0m3.762s 00:14:07.566 15:01:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:07.566 15:01:23 -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.566 ************************************ 00:14:07.566 END TEST acl 00:14:07.566 ************************************ 00:14:07.827 15:01:23 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:07.827 15:01:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:07.827 15:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.827 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:14:07.827 ************************************ 00:14:07.827 START TEST hugepages 00:14:07.827 ************************************ 00:14:07.827 15:01:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:07.827 * Looking for test storage... 00:14:07.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:07.827 15:01:23 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:07.827 15:01:23 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:07.827 15:01:23 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:07.827 15:01:23 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:07.827 15:01:23 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:07.827 15:01:23 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:07.827 15:01:23 -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:07.827 15:01:23 -- setup/common.sh@18 -- # local node= 00:14:07.827 15:01:23 -- setup/common.sh@19 -- # local var val 00:14:07.827 15:01:23 -- setup/common.sh@20 -- # local mem_f mem 00:14:07.827 15:01:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:07.827 15:01:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:07.827 15:01:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:07.827 15:01:23 -- setup/common.sh@28 -- # mapfile -t mem 00:14:07.827 15:01:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5469672 kB' 'MemAvailable: 7395640 kB' 'Buffers: 2436 kB' 'Cached: 2135748 kB' 'SwapCached: 0 kB' 'Active: 887816 kB' 'Inactive: 1368492 kB' 'Active(anon): 128612 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368492 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119976 kB' 'Mapped: 48820 kB' 'Shmem: 10488 kB' 'KReclaimable: 70424 kB' 'Slab: 148040 kB' 'SReclaimable: 70424 kB' 'SUnreclaim: 77616 kB' 'KernelStack: 6448 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 354196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 
00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 
15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 
00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.827 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.827 15:01:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # continue 00:14:07.828 15:01:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:07.828 15:01:23 -- setup/common.sh@31 -- # read -r var val _ 00:14:07.828 15:01:23 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:07.828 15:01:23 -- setup/common.sh@33 -- # echo 2048 00:14:07.828 15:01:23 -- setup/common.sh@33 -- # return 0 00:14:07.828 15:01:23 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:07.828 15:01:23 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:07.828 15:01:23 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:07.828 15:01:23 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:07.828 15:01:23 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:07.828 15:01:23 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:14:07.828 15:01:23 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:07.828 15:01:23 -- setup/hugepages.sh@207 -- # get_nodes 00:14:07.828 15:01:23 -- setup/hugepages.sh@27 -- # local node 00:14:07.828 15:01:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:07.828 15:01:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:07.828 15:01:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:07.828 15:01:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:07.828 15:01:23 -- setup/hugepages.sh@208 -- # clear_hp 00:14:07.828 15:01:23 -- setup/hugepages.sh@37 -- # local node hp 00:14:07.828 15:01:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:07.828 15:01:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:07.828 15:01:23 -- setup/hugepages.sh@41 -- # echo 0 00:14:07.828 15:01:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:07.828 15:01:23 -- setup/hugepages.sh@41 -- # echo 0 00:14:07.828 15:01:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:07.828 15:01:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:07.828 15:01:23 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:07.828 15:01:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:07.828 15:01:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.828 15:01:23 -- common/autotest_common.sh@10 -- # set +x 00:14:08.135 ************************************ 00:14:08.135 START TEST default_setup 00:14:08.135 ************************************ 00:14:08.135 15:01:23 -- common/autotest_common.sh@1111 -- # default_setup 00:14:08.135 15:01:23 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:08.135 15:01:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:08.135 15:01:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:08.135 15:01:23 -- setup/hugepages.sh@51 -- # shift 00:14:08.135 15:01:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:08.135 15:01:23 -- setup/hugepages.sh@52 -- # local node_ids 00:14:08.135 15:01:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:08.135 15:01:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:08.135 15:01:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:08.135 15:01:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:08.135 15:01:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:08.135 15:01:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:08.135 15:01:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:08.135 15:01:23 -- setup/hugepages.sh@67 -- # 
nodes_test=() 00:14:08.135 15:01:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:08.135 15:01:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:08.135 15:01:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:08.135 15:01:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:08.135 15:01:23 -- setup/hugepages.sh@73 -- # return 0 00:14:08.135 15:01:23 -- setup/hugepages.sh@137 -- # setup output 00:14:08.135 15:01:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:08.135 15:01:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.967 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.967 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.967 15:01:24 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:08.967 15:01:24 -- setup/hugepages.sh@89 -- # local node 00:14:08.967 15:01:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:08.967 15:01:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:08.967 15:01:24 -- setup/hugepages.sh@92 -- # local surp 00:14:08.967 15:01:24 -- setup/hugepages.sh@93 -- # local resv 00:14:08.967 15:01:24 -- setup/hugepages.sh@94 -- # local anon 00:14:08.967 15:01:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:08.967 15:01:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:08.967 15:01:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:08.967 15:01:24 -- setup/common.sh@18 -- # local node= 00:14:08.967 15:01:24 -- setup/common.sh@19 -- # local var val 00:14:08.967 15:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:14:08.967 15:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:08.967 15:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:08.967 15:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:08.967 15:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:14:08.967 15:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:08.967 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.967 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.967 15:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575984 kB' 'MemAvailable: 9501828 kB' 'Buffers: 2436 kB' 'Cached: 2135744 kB' 'SwapCached: 0 kB' 'Active: 893456 kB' 'Inactive: 1368504 kB' 'Active(anon): 134252 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 125336 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 70148 kB' 'Slab: 147672 kB' 'SReclaimable: 70148 kB' 'SUnreclaim: 77524 kB' 'KernelStack: 6392 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 
'DirectMap1G: 8388608 kB'
[setup/common.sh@32 compares each /proc/meminfo field from MemTotal through VmallocChunk against AnonHugePages and skips every one with 'continue']
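The nr_hugepages=1024 target chosen by get_test_nr_hugepages 2097152 0 a few entries back is consistent with dividing the requested hugepage memory by the Hugepagesize just read from /proc/meminfo. A worked version of that arithmetic (variable names here are illustrative, not the script's own):

    HUGEMEM_KB=2097152        # size argument passed to get_test_nr_hugepages, i.e. 2 GiB in kB
    HUGEPAGESIZE_KB=2048      # Hugepagesize reported by /proc/meminfo above
    echo $(( HUGEMEM_KB / HUGEPAGESIZE_KB ))   # prints 1024, the nr_hugepages value in the trace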
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:08.969 15:01:24 -- setup/common.sh@33 -- # echo 0 00:14:08.969 15:01:24 -- setup/common.sh@33 -- # return 0 00:14:08.969 15:01:24 -- setup/hugepages.sh@97 -- # anon=0 00:14:08.969 15:01:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:08.969 15:01:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:08.969 15:01:24 -- setup/common.sh@18 -- # local node= 00:14:08.969 15:01:24 -- setup/common.sh@19 -- # local var val 00:14:08.969 15:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:14:08.969 15:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:08.969 15:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:08.969 15:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:08.969 15:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:14:08.969 15:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:08.969 15:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575984 kB' 'MemAvailable: 9501828 kB' 'Buffers: 2436 kB' 'Cached: 2135744 kB' 'SwapCached: 0 kB' 'Active: 893180 kB' 'Inactive: 1368504 kB' 'Active(anon): 133976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 125096 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70148 kB' 'Slab: 147636 kB' 'SReclaimable: 70148 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6392 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.969 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.969 15:01:24 -- 
setup/common.sh@32 -- [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.969 15:01:24 -- setup/common.sh@32 -- # continue
[the same field-by-field scan repeats for HugePages_Surp: setup/common.sh@32 skips Buffers through HardwareCorrupted with 'continue']
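Before this verification pass, the clear_hp entries earlier in the trace walked every NUMA node's hugepage directories and echoed 0 into them, then exported CLEAR_HUGE=yes ahead of scripts/setup.sh. A minimal sketch of that reset, assuming a root shell and assuming the zeroes land in each pool's nr_hugepages knob (the redirection target is not visible in xtrace output):

    # Drop any pre-existing per-node hugepage reservations, for every page size on every node.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # requires root
        done
    done
    export CLEAR_HUGE=yes                  # exported before scripts/setup.sh runs, as in the trace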
00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # continue 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:08.970 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:08.970 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:08.970 15:01:24 -- setup/common.sh@33 -- # echo 0 00:14:08.970 15:01:24 -- setup/common.sh@33 -- # return 0 00:14:08.970 15:01:24 -- setup/hugepages.sh@99 -- # surp=0 00:14:08.970 15:01:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:08.970 15:01:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:14:08.970 15:01:24 -- setup/common.sh@18 -- # local node= 00:14:08.970 15:01:24 -- setup/common.sh@19 -- # local var val 00:14:08.970 15:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:14:08.970 15:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:08.970 15:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:08.970 15:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:08.970 15:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:14:09.233 15:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575984 kB' 'MemAvailable: 9501828 kB' 'Buffers: 2436 kB' 'Cached: 2135744 kB' 'SwapCached: 0 kB' 'Active: 893328 kB' 'Inactive: 1368504 kB' 'Active(anon): 134124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 125232 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70148 kB' 'Slab: 147636 kB' 'SReclaimable: 70148 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6392 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.233 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.233 15:01:24 -- setup/common.sh@32 -- # continue 
[the scan repeats once more for HugePages_Rsvd: setup/common.sh@32 skips Active through FileHugePages with 'continue']
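The entries that follow finish verify_nr_hugepages: HugePages_Rsvd comes back as 0, the helper echoes the four summary lines, and the test asserts that the pool it configured adds up. A sketch of that consistency check in the same spirit (read_field is a stand-in helper, not the script's own):

    read_field() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2 }' /proc/meminfo; }

    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # global pool size set up for the test (1024 here)
    surp=$(read_field HugePages_Surp)
    resv=$(read_field HugePages_Rsvd)
    anon=$(read_field AnonHugePages)
    total=$(read_field HugePages_Total)

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) check in the trace below,
    # plus the expectation that no transparent hugepages are in use.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( anon == 0 )) || echo "unexpected AnonHugePages: ${anon} kB" >&2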
15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:09.234 15:01:24 -- setup/common.sh@33 -- # echo 0 00:14:09.234 15:01:24 -- setup/common.sh@33 -- # return 0 00:14:09.234 nr_hugepages=1024 00:14:09.234 resv_hugepages=0 00:14:09.234 surplus_hugepages=0 00:14:09.234 anon_hugepages=0 00:14:09.234 15:01:24 -- setup/hugepages.sh@100 -- # resv=0 00:14:09.234 15:01:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:09.234 15:01:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:09.234 15:01:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:09.234 15:01:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:09.234 15:01:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:09.234 15:01:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:09.234 15:01:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:09.234 15:01:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:09.234 15:01:24 -- setup/common.sh@18 -- # local node= 00:14:09.234 15:01:24 -- setup/common.sh@19 -- # local var val 00:14:09.234 15:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:14:09.234 15:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:09.234 15:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:09.234 15:01:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:09.234 15:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:14:09.234 15:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575984 kB' 'MemAvailable: 9501828 kB' 'Buffers: 2436 kB' 'Cached: 2135744 kB' 'SwapCached: 0 kB' 'Active: 893128 kB' 'Inactive: 1368504 kB' 'Active(anon): 133924 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 125040 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70148 kB' 'Slab: 147636 kB' 'SReclaimable: 70148 kB' 'SUnreclaim: 77488 kB' 'KernelStack: 6392 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.234 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.234 15:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 
15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.235 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.235 15:01:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 
15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:09.236 15:01:24 -- setup/common.sh@33 -- # echo 1024 00:14:09.236 15:01:24 -- setup/common.sh@33 -- # return 0 00:14:09.236 15:01:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:09.236 15:01:24 -- setup/hugepages.sh@112 -- # get_nodes 00:14:09.236 15:01:24 -- setup/hugepages.sh@27 -- # local node 00:14:09.236 15:01:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:09.236 15:01:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:09.236 15:01:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:09.236 15:01:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:09.236 15:01:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:09.236 15:01:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:09.236 15:01:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:09.236 15:01:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:09.236 15:01:24 -- setup/common.sh@18 -- # local node=0 00:14:09.236 15:01:24 -- setup/common.sh@19 -- # local var val 00:14:09.236 15:01:24 -- setup/common.sh@20 -- # local mem_f mem 00:14:09.236 15:01:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:09.236 15:01:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:09.236 15:01:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:09.236 15:01:24 -- setup/common.sh@28 -- # mapfile -t mem 00:14:09.236 15:01:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:09.236 15:01:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7575984 kB' 'MemUsed: 4665996 kB' 'SwapCached: 0 kB' 'Active: 893128 kB' 'Inactive: 1368504 kB' 'Active(anon): 133924 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'FilePages: 2138180 kB' 'Mapped: 48832 kB' 'AnonPages: 125300 kB' 'Shmem: 10464 kB' 'KernelStack: 6392 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70148 kB' 'Slab: 147636 kB' 'SReclaimable: 70148 kB' 'SUnreclaim: 77488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # 
continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # [[ 
Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.236 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.236 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # continue 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.237 15:01:24 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.237 15:01:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:09.237 15:01:24 -- setup/common.sh@33 -- # echo 0 00:14:09.237 15:01:24 -- setup/common.sh@33 -- # return 0 00:14:09.237 node0=1024 expecting 1024 00:14:09.237 15:01:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:09.237 15:01:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:09.237 15:01:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:09.237 15:01:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:09.237 15:01:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
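
(Reader's note on the trace above: the long read/continue loop is the get_meminfo helper from SPDK's test/setup/common.sh picking a single field out of /proc/meminfo, or out of a per-node meminfo file when a node number is passed, as in "get_meminfo HugePages_Surp 0". Below is a minimal, simplified sketch of that pattern, assuming bash with extglob; it is not the verbatim SPDK code.)

#!/usr/bin/env bash
# Simplified sketch of the lookup pattern seen in the trace; the real helper
# lives in SPDK's test/setup/common.sh. Given a meminfo key and an optional
# NUMA node, print that key's value.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local var val _
    # Per-node stats, when a node is given and its sysfs file exists, come from sysfs.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so the key
    # names match the /proc/meminfo spelling.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: the check the trace performs for node 0's surplus hugepages.
[[ $(get_meminfo HugePages_Surp 0) -eq 0 ]] && echo 'node0 surplus ok'
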
00:14:09.237 15:01:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:09.237 00:14:09.237 real 0m1.139s 00:14:09.237 user 0m0.460s 00:14:09.237 sys 0m0.634s 00:14:09.237 ************************************ 00:14:09.237 END TEST default_setup 00:14:09.237 ************************************ 00:14:09.237 15:01:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.237 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:14:09.237 15:01:24 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:14:09.237 15:01:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.237 15:01:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.237 15:01:24 -- common/autotest_common.sh@10 -- # set +x 00:14:09.237 ************************************ 00:14:09.237 START TEST per_node_1G_alloc 00:14:09.237 ************************************ 00:14:09.237 15:01:24 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:14:09.237 15:01:24 -- setup/hugepages.sh@143 -- # local IFS=, 00:14:09.237 15:01:24 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:14:09.237 15:01:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:14:09.237 15:01:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:09.237 15:01:24 -- setup/hugepages.sh@51 -- # shift 00:14:09.237 15:01:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:09.237 15:01:24 -- setup/hugepages.sh@52 -- # local node_ids 00:14:09.237 15:01:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:09.237 15:01:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:09.237 15:01:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:09.237 15:01:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:09.237 15:01:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:09.237 15:01:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:09.237 15:01:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:09.237 15:01:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:09.237 15:01:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:09.237 15:01:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:09.237 15:01:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:09.237 15:01:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:14:09.237 15:01:24 -- setup/hugepages.sh@73 -- # return 0 00:14:09.237 15:01:24 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:14:09.237 15:01:24 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:14:09.237 15:01:24 -- setup/hugepages.sh@146 -- # setup output 00:14:09.237 15:01:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:09.237 15:01:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:09.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.806 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.806 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:09.806 15:01:25 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:14:09.806 15:01:25 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:14:09.806 15:01:25 -- setup/hugepages.sh@89 -- # local node 00:14:09.806 15:01:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:09.806 15:01:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:09.807 15:01:25 -- setup/hugepages.sh@92 -- # local surp 00:14:09.807 15:01:25 -- setup/hugepages.sh@93 -- # local resv 00:14:09.807 15:01:25 -- 
setup/hugepages.sh@94 -- # local anon 00:14:09.807 15:01:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:09.807 15:01:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:09.807 15:01:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:09.807 15:01:25 -- setup/common.sh@18 -- # local node= 00:14:09.807 15:01:25 -- setup/common.sh@19 -- # local var val 00:14:09.807 15:01:25 -- setup/common.sh@20 -- # local mem_f mem 00:14:09.807 15:01:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:09.807 15:01:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:09.807 15:01:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:09.807 15:01:25 -- setup/common.sh@28 -- # mapfile -t mem 00:14:09.807 15:01:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8625704 kB' 'MemAvailable: 10551540 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893280 kB' 'Inactive: 1368520 kB' 'Active(anon): 134076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1008 kB' 'Writeback: 0 kB' 'AnonPages: 125232 kB' 'Mapped: 49012 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147596 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77496 kB' 'KernelStack: 6356 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55156 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 
00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.807 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.807 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 
00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # continue 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:09.808 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:09.808 15:01:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:09.808 15:01:25 -- setup/common.sh@33 -- # echo 0 00:14:09.808 15:01:25 -- setup/common.sh@33 -- # return 0 00:14:10.071 15:01:25 -- setup/hugepages.sh@97 -- # anon=0 00:14:10.071 15:01:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:10.071 15:01:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:10.071 15:01:25 -- setup/common.sh@18 -- # local node= 00:14:10.071 15:01:25 -- setup/common.sh@19 -- # local var val 00:14:10.071 15:01:25 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.071 15:01:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.071 15:01:25 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:14:10.071 15:01:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.071 15:01:25 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.071 15:01:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8625704 kB' 'MemAvailable: 10551540 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893056 kB' 'Inactive: 1368520 kB' 'Active(anon): 133852 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1008 kB' 'Writeback: 0 kB' 'AnonPages: 125272 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147596 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77496 kB' 'KernelStack: 6416 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.071 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.071 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 
-- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- 
setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.072 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.072 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.073 15:01:25 -- setup/common.sh@33 -- # echo 0 00:14:10.073 15:01:25 -- setup/common.sh@33 -- # return 0 00:14:10.073 15:01:25 -- setup/hugepages.sh@99 -- # surp=0 00:14:10.073 15:01:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:10.073 15:01:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:10.073 15:01:25 -- setup/common.sh@18 -- # local node= 00:14:10.073 15:01:25 -- setup/common.sh@19 -- # local var val 00:14:10.073 15:01:25 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.073 15:01:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.073 15:01:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.073 15:01:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.073 15:01:25 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.073 15:01:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8625756 kB' 'MemAvailable: 10551592 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 892980 kB' 'Inactive: 1368520 kB' 'Active(anon): 133776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1008 kB' 'Writeback: 0 kB' 'AnonPages: 125164 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 
147596 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77496 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.073 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.073 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 
15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 
-- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 
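(Aside on the loop being traced above: the repeated IFS=': ' / read / continue entries are setup/common.sh's get_meminfo scanning key/value pairs until it hits the requested field. A condensed, hedged bash sketch of that lookup, reconstructed only from the trace — the helper name and simplified flow here are illustrative, not the verbatim SPDK script:)

    # sketch assuming bash; mirrors the traced pattern: pick the meminfo file,
    # strip the per-node "Node N " prefix, then match the requested key
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # a per-node query reads that node's own meminfo instead of the global one
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # node files prefix every line with "Node N "; drop it so keys line up
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            # non-matching keys are skipped (the "continue" entries in the trace)
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Free 0   -> 512 on this run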
00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.074 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.074 15:01:25 -- setup/common.sh@33 -- # echo 0 00:14:10.074 15:01:25 -- setup/common.sh@33 -- # return 0 00:14:10.074 nr_hugepages=512 00:14:10.074 resv_hugepages=0 00:14:10.074 surplus_hugepages=0 00:14:10.074 anon_hugepages=0 00:14:10.074 15:01:25 -- setup/hugepages.sh@100 -- # resv=0 00:14:10.074 15:01:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:10.074 15:01:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:10.074 15:01:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:10.074 15:01:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:10.074 15:01:25 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:10.074 15:01:25 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:10.074 15:01:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:10.074 15:01:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:10.074 15:01:25 -- setup/common.sh@18 -- # local node= 00:14:10.074 15:01:25 -- setup/common.sh@19 -- # local var val 00:14:10.074 15:01:25 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.074 15:01:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.074 15:01:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.074 15:01:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.074 15:01:25 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.074 15:01:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.074 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8625756 kB' 'MemAvailable: 10551592 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893236 kB' 'Inactive: 1368520 kB' 'Active(anon): 134032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1008 kB' 'Writeback: 0 kB' 'AnonPages: 125184 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147596 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77496 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 
'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 
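(Aside on the verification around these entries: the hugepages.sh steps traced here, roughly @99 through @130 of that script, reduce to simple accounting — the target page count is the requested size divided by the default hugepage size, and the live counters must balance against it globally and per node. A hedged sketch of that arithmetic with illustrative variable names; awk stands in for the script's own get_meminfo helper, and the 1G size is taken from the Hugetlb: 1048576 kB value shown in the trace:)

    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    size_kb=1048576                                # 1G worth of pages -> 512 for this test
    nr_hugepages=$(( size_kb / hugepagesize_kb ))

    # live counters must add up: HugePages_Total == target + surplus + reserved
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count: $total"

    # per-node check; node meminfo lines look like "Node 0 HugePages_Total: 512"
    node0=$(awk '/HugePages_Total/ {print $NF}' /sys/devices/system/node/node0/meminfo)
    echo "node0=$node0 expecting $nr_hugepages"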
00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.075 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.075 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 
-- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.076 15:01:25 -- setup/common.sh@33 -- # echo 512 00:14:10.076 15:01:25 -- setup/common.sh@33 -- # return 0 00:14:10.076 15:01:25 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:10.076 15:01:25 -- setup/hugepages.sh@112 -- # get_nodes 00:14:10.076 15:01:25 -- setup/hugepages.sh@27 -- # local node 00:14:10.076 15:01:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:10.076 15:01:25 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:10.076 15:01:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:10.076 15:01:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:10.076 15:01:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:10.076 15:01:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:10.076 15:01:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:10.076 15:01:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:10.076 15:01:25 -- setup/common.sh@18 -- # local node=0 00:14:10.076 15:01:25 -- setup/common.sh@19 -- # local var val 00:14:10.076 15:01:25 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.076 15:01:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.076 15:01:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:10.076 15:01:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:10.076 15:01:25 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.076 15:01:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8625756 kB' 'MemUsed: 3616224 kB' 'SwapCached: 0 kB' 'Active: 893012 kB' 'Inactive: 1368520 kB' 'Active(anon): 133808 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1008 kB' 'Writeback: 0 kB' 'FilePages: 2138188 kB' 'Mapped: 48840 kB' 'AnonPages: 125164 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70100 kB' 'Slab: 147596 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.076 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.076 15:01:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- 
setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # continue 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.077 15:01:25 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.077 15:01:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.077 15:01:25 -- setup/common.sh@33 -- # echo 0 00:14:10.077 15:01:25 -- setup/common.sh@33 -- # return 0 00:14:10.077 15:01:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:10.077 15:01:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:10.077 15:01:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:10.077 15:01:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:10.077 node0=512 expecting 512 00:14:10.077 15:01:25 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:10.077 15:01:25 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:10.077 00:14:10.077 real 0m0.767s 00:14:10.077 user 0m0.369s 00:14:10.077 sys 0m0.418s 00:14:10.077 15:01:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:10.078 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:14:10.078 ************************************ 00:14:10.078 END TEST per_node_1G_alloc 00:14:10.078 ************************************ 00:14:10.078 15:01:25 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:14:10.078 15:01:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:10.078 15:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:10.078 15:01:25 -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 ************************************ 00:14:10.337 START TEST even_2G_alloc 00:14:10.337 ************************************ 00:14:10.337 15:01:25 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:14:10.337 15:01:25 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:14:10.337 15:01:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:10.337 15:01:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:10.337 15:01:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:10.338 15:01:25 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:10.338 15:01:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:10.338 15:01:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:10.338 15:01:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:10.338 15:01:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:10.338 15:01:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:10.338 15:01:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:10.338 15:01:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:10.338 15:01:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:10.338 15:01:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:10.338 15:01:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:10.338 15:01:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:14:10.338 15:01:25 -- setup/hugepages.sh@83 -- # : 0 00:14:10.338 15:01:25 -- setup/hugepages.sh@84 -- # : 0 00:14:10.338 15:01:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:10.338 15:01:25 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:14:10.338 15:01:25 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:14:10.338 15:01:25 -- setup/hugepages.sh@153 -- # setup output 00:14:10.338 15:01:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:10.338 15:01:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:10.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:10.908 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:10.908 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:10.908 15:01:26 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:14:10.909 15:01:26 -- setup/hugepages.sh@89 -- # local node 00:14:10.909 15:01:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:10.909 15:01:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:10.909 15:01:26 -- setup/hugepages.sh@92 -- # local surp 00:14:10.909 15:01:26 -- setup/hugepages.sh@93 -- # local resv 00:14:10.909 15:01:26 -- setup/hugepages.sh@94 -- # local anon 00:14:10.909 15:01:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:10.909 15:01:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:10.909 15:01:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:10.909 15:01:26 -- setup/common.sh@18 -- # local node= 00:14:10.909 15:01:26 -- setup/common.sh@19 -- # local var val 00:14:10.909 15:01:26 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.909 15:01:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.909 15:01:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.909 15:01:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.909 15:01:26 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.909 15:01:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589448 kB' 'MemAvailable: 9515284 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893696 kB' 'Inactive: 1368520 kB' 'Active(anon): 134492 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1180 kB' 'Writeback: 
0 kB' 'AnonPages: 125376 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147484 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77384 kB' 'KernelStack: 6372 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 
15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 
15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.909 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.909 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:10.910 15:01:26 -- setup/common.sh@33 -- # echo 0 00:14:10.910 15:01:26 -- setup/common.sh@33 -- # return 0 00:14:10.910 15:01:26 -- setup/hugepages.sh@97 -- # anon=0 00:14:10.910 15:01:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:10.910 15:01:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:10.910 15:01:26 -- setup/common.sh@18 -- # local node= 00:14:10.910 15:01:26 -- setup/common.sh@19 -- # local var val 00:14:10.910 15:01:26 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.910 15:01:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.910 15:01:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.910 15:01:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.910 15:01:26 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.910 15:01:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.910 15:01:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589448 kB' 'MemAvailable: 9515284 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893272 kB' 'Inactive: 1368520 kB' 'Active(anon): 134068 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1180 kB' 'Writeback: 0 kB' 'AnonPages: 125216 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147484 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77384 kB' 'KernelStack: 6356 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.910 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.910 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 
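The long stretch of per-key "continue" entries in this trace is setup/common.sh's get_meminfo helper scanning every field of /proc/meminfo until it reaches the key it was asked for (AnonHugePages returned 0 above, giving anon=0; HugePages_Surp is being scanned here). Below is a minimal sketch of that helper, reconstructed from this trace rather than taken verbatim from the SPDK source, so exact names and line numbers are approximate:

#!/usr/bin/env bash
shopt -s extglob

# get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
# or from the node-local meminfo file when a NUMA node number is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f mem line
    mem_f=/proc/meminfo
    # Per-node queries (e.g. get_meminfo HugePages_Surp 0) read the
    # node-specific file instead; with an empty node the path does not
    # exist, which is the "[[ -e .../node/meminfo ]]" check in the trace.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node-local meminfo lines carry a leading "Node N " prefix; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # each mismatch logs one 'continue' above
        echo "$val"
        return 0
    done
    return 1
}

# Calls matching this log:
#   get_meminfo AnonHugePages    -> 0
#   get_meminfo HugePages_Surp   -> 0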
15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.911 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.911 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.912 15:01:26 -- setup/common.sh@33 -- # echo 0 00:14:10.912 15:01:26 -- setup/common.sh@33 -- # return 0 00:14:10.912 15:01:26 -- setup/hugepages.sh@99 -- # surp=0 00:14:10.912 15:01:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:10.912 15:01:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:10.912 15:01:26 -- setup/common.sh@18 -- # local node= 00:14:10.912 15:01:26 -- setup/common.sh@19 -- # local var val 00:14:10.912 15:01:26 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.912 15:01:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.912 15:01:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.912 15:01:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.912 15:01:26 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.912 15:01:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589448 kB' 'MemAvailable: 9515284 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893120 kB' 'Inactive: 1368520 kB' 'Active(anon): 133916 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1180 kB' 'Writeback: 0 kB' 'AnonPages: 125044 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147484 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77384 kB' 'KernelStack: 6340 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- 
# IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 
15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.912 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.912 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.913 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.913 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:10.914 15:01:26 -- setup/common.sh@33 -- # echo 0 00:14:10.914 15:01:26 -- setup/common.sh@33 -- # return 0 00:14:10.914 15:01:26 -- setup/hugepages.sh@100 -- # resv=0 00:14:10.914 15:01:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:10.914 nr_hugepages=1024 00:14:10.914 resv_hugepages=0 00:14:10.914 15:01:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:10.914 surplus_hugepages=0 00:14:10.914 15:01:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:10.914 anon_hugepages=0 00:14:10.914 15:01:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:10.914 15:01:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:10.914 15:01:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:10.914 15:01:26 -- 
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:10.914 15:01:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:10.914 15:01:26 -- setup/common.sh@18 -- # local node= 00:14:10.914 15:01:26 -- setup/common.sh@19 -- # local var val 00:14:10.914 15:01:26 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.914 15:01:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.914 15:01:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:10.914 15:01:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:10.914 15:01:26 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.914 15:01:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589448 kB' 'MemAvailable: 9515284 kB' 'Buffers: 2436 kB' 'Cached: 2135752 kB' 'SwapCached: 0 kB' 'Active: 893320 kB' 'Inactive: 1368520 kB' 'Active(anon): 134116 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1180 kB' 'Writeback: 0 kB' 'AnonPages: 125244 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147480 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77380 kB' 'KernelStack: 6400 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 
15:01:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.914 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.914 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 
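Once anon, surp and resv have been collected (all 0 in this run), hugepages.sh echoes the summary seen above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and checks that the kernel's global pool adds up before moving on to the per-node totals. A condensed sketch of that bookkeeping, using the variable names that appear in the trace (the real script spreads this across hugepages.sh@97-110):

anon=$(get_meminfo AnonHugePages)     # 0 in this run
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
nr_hugepages=1024                     # pool size the even_2G_alloc test set up

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Global sanity checks: the kernel's HugePages_Total must match the
# requested pool plus surplus and reserved pages (both 0 here).
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
(( $(get_meminfo HugePages_Total) == nr_hugepages ))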
15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.915 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.915 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:10.916 15:01:26 -- setup/common.sh@33 -- # echo 1024 00:14:10.916 15:01:26 -- setup/common.sh@33 -- # return 0 00:14:10.916 15:01:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:10.916 15:01:26 -- setup/hugepages.sh@112 -- # get_nodes 00:14:10.916 15:01:26 -- setup/hugepages.sh@27 -- # local node 00:14:10.916 15:01:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:10.916 15:01:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:10.916 15:01:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:10.916 15:01:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:10.916 15:01:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:10.916 15:01:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:10.916 15:01:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:10.916 15:01:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:10.916 15:01:26 -- setup/common.sh@18 -- # local node=0 00:14:10.916 15:01:26 -- setup/common.sh@19 -- # local var val 00:14:10.916 15:01:26 -- setup/common.sh@20 -- # local mem_f mem 00:14:10.916 15:01:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:10.916 15:01:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:10.916 15:01:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:10.916 15:01:26 -- setup/common.sh@28 -- # mapfile -t mem 00:14:10.916 15:01:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:10.916 15:01:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7589448 kB' 'MemUsed: 4652532 kB' 'SwapCached: 0 kB' 'Active: 893264 kB' 'Inactive: 1368520 kB' 'Active(anon): 134060 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1180 kB' 
'Writeback: 0 kB' 'FilePages: 2138188 kB' 'Mapped: 48860 kB' 'AnonPages: 125192 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70100 kB' 'Slab: 147480 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.916 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.916 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 
15:01:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # continue 00:14:10.917 
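After the global pool checks out, the script walks each NUMA node (only node0 on this runner), folds reserved and surplus pages into the per-node expectation, and compares it with what the kernel reports; that is the "node0=1024 expecting 1024" line just below. This is also where the node-local file /sys/devices/system/node/node0/meminfo is read; its MemUsed of 4652532 kB is simply MemTotal 12241980 kB minus MemFree 7589448 kB. A rough sketch of that per-node accounting, reusing the get_meminfo sketch above; array names follow the trace, but the exact control flow is assumed:

nodes_sys=() nodes_test=()
resv=0                                  # reserved pages from the global check
nodes_test[0]=1024                      # pages the test expects node0 to hold
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=1024      # pages the kernel reports per node
done
no_nodes=${#nodes_sys[@]}               # 1 on this VM host

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # + reserved (0)
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # + surplus (0)
done

echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"   # node0=1024 expecting 1024
[[ ${nodes_test[0]} == "${nodes_sys[0]}" ]]               # the test passes when they match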
15:01:26 -- setup/common.sh@31 -- # IFS=': ' 00:14:10.917 15:01:26 -- setup/common.sh@31 -- # read -r var val _ 00:14:10.917 15:01:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:10.917 15:01:26 -- setup/common.sh@33 -- # echo 0 00:14:10.917 15:01:26 -- setup/common.sh@33 -- # return 0 00:14:10.917 15:01:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:10.917 15:01:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:10.917 15:01:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:10.917 15:01:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:10.917 15:01:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:10.917 node0=1024 expecting 1024 00:14:10.917 15:01:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:10.917 00:14:10.917 real 0m0.729s 00:14:10.917 user 0m0.316s 00:14:10.917 sys 0m0.461s 00:14:10.917 15:01:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:10.917 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:14:10.917 ************************************ 00:14:10.917 END TEST even_2G_alloc 00:14:10.917 ************************************ 00:14:11.177 15:01:26 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:14:11.177 15:01:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:11.177 15:01:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.177 15:01:26 -- common/autotest_common.sh@10 -- # set +x 00:14:11.177 ************************************ 00:14:11.177 START TEST odd_alloc 00:14:11.177 ************************************ 00:14:11.177 15:01:26 -- common/autotest_common.sh@1111 -- # odd_alloc 00:14:11.177 15:01:26 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:14:11.177 15:01:26 -- setup/hugepages.sh@49 -- # local size=2098176 00:14:11.177 15:01:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:14:11.177 15:01:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:11.177 15:01:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:11.177 15:01:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:11.177 15:01:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:14:11.177 15:01:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:11.177 15:01:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:11.177 15:01:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:11.177 15:01:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:14:11.177 15:01:26 -- setup/hugepages.sh@83 -- # : 0 00:14:11.177 15:01:26 -- setup/hugepages.sh@84 -- # : 0 00:14:11.177 15:01:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:11.177 15:01:26 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:14:11.177 15:01:26 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:14:11.177 15:01:26 -- setup/hugepages.sh@160 -- # setup output 00:14:11.177 15:01:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:11.177 15:01:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:11.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so 
not binding PCI dev 00:14:11.748 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.748 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.748 15:01:27 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:14:11.748 15:01:27 -- setup/hugepages.sh@89 -- # local node 00:14:11.748 15:01:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:11.748 15:01:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:11.748 15:01:27 -- setup/hugepages.sh@92 -- # local surp 00:14:11.748 15:01:27 -- setup/hugepages.sh@93 -- # local resv 00:14:11.748 15:01:27 -- setup/hugepages.sh@94 -- # local anon 00:14:11.748 15:01:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:11.748 15:01:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:11.748 15:01:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:11.748 15:01:27 -- setup/common.sh@18 -- # local node= 00:14:11.748 15:01:27 -- setup/common.sh@19 -- # local var val 00:14:11.748 15:01:27 -- setup/common.sh@20 -- # local mem_f mem 00:14:11.748 15:01:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.748 15:01:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:11.748 15:01:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:11.748 15:01:27 -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.749 15:01:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7586564 kB' 'MemAvailable: 9512436 kB' 'Buffers: 2436 kB' 'Cached: 2135788 kB' 'SwapCached: 0 kB' 'Active: 893260 kB' 'Inactive: 1368556 kB' 'Active(anon): 134056 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1316 kB' 'Writeback: 0 kB' 'AnonPages: 125152 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147456 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77356 kB' 'KernelStack: 6356 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
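The odd_alloc run above sizes its pool from HUGEMEM=2049: 2049 MB is 2098176 kB, which the helper turns into nr_hugepages=1025, an intentionally odd page count at the default 2048 kB hugepage size. The get_meminfo trace that follows is setup/common.sh scanning /proc/meminfo: each line is split on IFS=': ' into a key and a value, every non-matching key hits "continue", and the value of the requested field (AnonHugePages here) is echoed back. A stripped-down sketch of that pattern, not the exact SPDK helper, assuming the standard /proc/meminfo layout:

    get_meminfo() {
        # $1 is the field to look up, e.g. AnonHugePages or HugePages_Total
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Total   # prints 1025 on this VM, per the meminfo dump above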
00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 
-- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.749 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.749 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:11.750 15:01:27 -- setup/common.sh@33 -- # echo 0 00:14:11.750 
15:01:27 -- setup/common.sh@33 -- # return 0 00:14:11.750 15:01:27 -- setup/hugepages.sh@97 -- # anon=0 00:14:11.750 15:01:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:11.750 15:01:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:11.750 15:01:27 -- setup/common.sh@18 -- # local node= 00:14:11.750 15:01:27 -- setup/common.sh@19 -- # local var val 00:14:11.750 15:01:27 -- setup/common.sh@20 -- # local mem_f mem 00:14:11.750 15:01:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.750 15:01:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:11.750 15:01:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:11.750 15:01:27 -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.750 15:01:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7586564 kB' 'MemAvailable: 9512436 kB' 'Buffers: 2436 kB' 'Cached: 2135788 kB' 'SwapCached: 0 kB' 'Active: 893412 kB' 'Inactive: 1368556 kB' 'Active(anon): 134208 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368556 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1316 kB' 'Writeback: 0 kB' 'AnonPages: 125412 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147452 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77352 kB' 'KernelStack: 6388 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.750 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.750 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 
-- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.751 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.751 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.752 15:01:27 -- setup/common.sh@33 -- # echo 0 00:14:11.752 15:01:27 -- setup/common.sh@33 -- # return 0 00:14:11.752 15:01:27 -- setup/hugepages.sh@99 -- # surp=0 00:14:11.752 15:01:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:11.752 15:01:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:11.752 15:01:27 -- setup/common.sh@18 -- # local node= 00:14:11.752 15:01:27 -- setup/common.sh@19 -- # local var val 00:14:11.752 15:01:27 -- setup/common.sh@20 -- # local mem_f mem 00:14:11.752 15:01:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.752 15:01:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:11.752 15:01:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:11.752 15:01:27 -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.752 15:01:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7586816 kB' 'MemAvailable: 9512692 kB' 'Buffers: 2436 kB' 'Cached: 2135792 kB' 'SwapCached: 0 kB' 'Active: 893256 kB' 'Inactive: 1368560 kB' 'Active(anon): 134052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368560 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1316 kB' 'Writeback: 0 kB' 'AnonPages: 125476 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147492 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77392 kB' 'KernelStack: 6416 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 
00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.752 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.752 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 
-- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:11.753 15:01:27 -- setup/common.sh@33 -- # echo 0 00:14:11.753 15:01:27 -- setup/common.sh@33 -- # return 0 00:14:11.753 15:01:27 -- setup/hugepages.sh@100 -- # resv=0 00:14:11.753 15:01:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:11.753 nr_hugepages=1025 00:14:11.753 resv_hugepages=0 00:14:11.753 15:01:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:11.753 surplus_hugepages=0 00:14:11.753 15:01:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:11.753 anon_hugepages=0 00:14:11.753 15:01:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:11.753 15:01:27 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:11.753 15:01:27 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:11.753 15:01:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:11.753 15:01:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:11.753 15:01:27 -- setup/common.sh@18 -- # local node= 00:14:11.753 15:01:27 -- setup/common.sh@19 -- # local var val 00:14:11.753 15:01:27 -- setup/common.sh@20 -- # local mem_f mem 00:14:11.753 15:01:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.753 15:01:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:11.753 15:01:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:11.753 15:01:27 -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.753 15:01:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7587068 kB' 'MemAvailable: 9512960 kB' 'Buffers: 2436 kB' 'Cached: 2135808 kB' 'SwapCached: 0 kB' 'Active: 893356 kB' 'Inactive: 1368576 kB' 'Active(anon): 134152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1332 kB' 'Writeback: 0 kB' 'AnonPages: 125340 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 70100 kB' 'Slab: 147488 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77388 kB' 'KernelStack: 6416 kB' 'PageTables: 4504 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 359712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.753 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.753 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.754 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.754 15:01:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
[setup/common.sh@31-32 xtrace, condensed: the remaining /proc/meminfo fields (Inactive(file) through Unaccepted) are each read, compared against HugePages_Total and skipped with 'continue'; nothing matches until the HugePages_Total entry below]
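The scan condensed above is setup/common.sh's get_meminfo helper walking a meminfo file one "key: value" pair at a time until it reaches the requested field. A minimal stand-alone sketch of the same pattern (hypothetical helper name, not the SPDK function itself):

#!/usr/bin/env bash
# meminfo_field FIELD [NODE]
# Print the value of FIELD from /proc/meminfo, or from the per-NUMA-node
# meminfo file when NODE is given, mirroring the field-by-field scan traced above.
meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}            # per-node files prefix each line with "Node N "
        IFS=': ' read -r var val _ <<<"$line" # split "HugePages_Total: 1025" into key and value
        if [[ $var == "$get" ]]; then         # first matching field wins, as in the trace
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

# Example: meminfo_field HugePages_Total 0   -> 1025 on the node-0 dump in this run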
00:14:11.755 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:11.755 15:01:27 -- setup/common.sh@33 -- # echo 1025 00:14:11.755 15:01:27 -- setup/common.sh@33 -- # return 0 00:14:11.755 15:01:27 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:11.755 15:01:27 -- setup/hugepages.sh@112 -- # get_nodes 00:14:11.755 15:01:27 -- setup/hugepages.sh@27 -- # local node 00:14:11.755 15:01:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:11.755 15:01:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:14:11.755 15:01:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:11.755 15:01:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:11.755 15:01:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:11.755 15:01:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:11.755 15:01:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:11.755 15:01:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:11.755 15:01:27 -- setup/common.sh@18 -- # local node=0 00:14:11.755 15:01:27 -- setup/common.sh@19 -- # local var val 00:14:11.755 15:01:27 -- setup/common.sh@20 -- # local mem_f mem 00:14:11.755 15:01:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:11.755 15:01:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:11.755 15:01:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:11.755 15:01:27 -- setup/common.sh@28 -- # mapfile -t mem 00:14:11.755 15:01:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:11.755 15:01:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7587068 kB' 'MemUsed: 4654912 kB' 'SwapCached: 0 kB' 'Active: 893384 kB' 'Inactive: 1368576 kB' 'Active(anon): 134180 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1332 kB' 'Writeback: 0 kB' 'FilePages: 2138244 kB' 'Mapped: 48864 kB' 'AnonPages: 125344 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70100 kB' 'Slab: 147484 kB' 'SReclaimable: 70100 kB' 'SUnreclaim: 77384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.755 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.755 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.755 15:01:27 -- setup/common.sh@32 
[setup/common.sh@31-32 xtrace, condensed: the node-0 meminfo fields (SwapCached through AnonHugePages) are each compared against HugePages_Surp and skipped with 'continue']
setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # continue 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # IFS=': ' 00:14:11.756 15:01:27 -- setup/common.sh@31 -- # read -r var val _ 00:14:11.756 15:01:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:11.756 15:01:27 -- setup/common.sh@33 -- # echo 0 00:14:11.756 15:01:27 -- setup/common.sh@33 -- # return 0 00:14:11.756 15:01:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:11.756 15:01:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:11.756 15:01:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:11.756 15:01:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:11.756 15:01:27 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:14:11.756 node0=1025 expecting 1025 00:14:11.756 15:01:27 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:14:11.756 00:14:11.756 real 0m0.699s 00:14:11.756 user 0m0.302s 00:14:11.756 sys 0m0.443s 00:14:11.756 15:01:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:11.756 15:01:27 -- common/autotest_common.sh@10 -- # set +x 00:14:11.756 ************************************ 00:14:11.756 END TEST odd_alloc 00:14:11.756 ************************************ 00:14:12.016 15:01:27 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:14:12.016 15:01:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.016 15:01:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.016 15:01:27 -- 
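The END TEST banner above closes the odd_alloc case; the check it performed is plain arithmetic over the meminfo values that were just dumped. A sketch of that bookkeeping, assuming only the standard /proc and per-node meminfo files (illustrative, not the hugepages.sh code):

#!/usr/bin/env bash
# field FIELD [FILE] - value column from /proc/meminfo or a per-node meminfo file
field() {
    awk -v k="$1:" '$1 == k {print $2; exit} $3 == k {print $4; exit}' "${2:-/proc/meminfo}"
}

nr_requested=1025
total=$(field HugePages_Total)
surp=$(field HugePages_Surp)
resv=$(field HugePages_Rsvd)

# hugepages.sh@110 in the trace: the kernel total must equal request + surplus + reserved
if (( total == nr_requested + surp + resv )); then
    echo "global hugepage count OK: $total"
fi

# single NUMA node in this VM, so node0's own meminfo must report the same 1025 pages
node0=$(field HugePages_Total /sys/devices/system/node/node0/meminfo)
echo "node0=$node0 expecting $nr_requested"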
common/autotest_common.sh@10 -- # set +x 00:14:12.016 ************************************ 00:14:12.016 START TEST custom_alloc 00:14:12.016 ************************************ 00:14:12.016 15:01:27 -- common/autotest_common.sh@1111 -- # custom_alloc 00:14:12.016 15:01:27 -- setup/hugepages.sh@167 -- # local IFS=, 00:14:12.016 15:01:27 -- setup/hugepages.sh@169 -- # local node 00:14:12.016 15:01:27 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:14:12.016 15:01:27 -- setup/hugepages.sh@170 -- # local nodes_hp 00:14:12.016 15:01:27 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:14:12.016 15:01:27 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:14:12.016 15:01:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:14:12.016 15:01:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:12.016 15:01:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:12.016 15:01:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:12.016 15:01:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:12.016 15:01:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:12.016 15:01:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:12.016 15:01:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:12.016 15:01:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:12.016 15:01:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:12.016 15:01:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:12.016 15:01:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:12.016 15:01:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:12.016 15:01:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:12.016 15:01:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:14:12.016 15:01:27 -- setup/hugepages.sh@83 -- # : 0 00:14:12.017 15:01:27 -- setup/hugepages.sh@84 -- # : 0 00:14:12.017 15:01:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:12.017 15:01:27 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:14:12.017 15:01:27 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:14:12.017 15:01:27 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:14:12.017 15:01:27 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:14:12.017 15:01:27 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:14:12.017 15:01:27 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:14:12.017 15:01:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:12.017 15:01:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:12.017 15:01:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:12.017 15:01:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:12.017 15:01:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:12.017 15:01:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:12.017 15:01:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:12.017 15:01:27 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:14:12.017 15:01:27 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:14:12.017 15:01:27 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:14:12.017 15:01:27 -- setup/hugepages.sh@78 -- # return 0 00:14:12.017 15:01:27 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:14:12.017 15:01:27 -- setup/hugepages.sh@187 -- # setup output 00:14:12.017 15:01:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:12.017 15:01:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:12.591 0000:00:03.0 (1af4 
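The custom_alloc setup traced above turns a requested size of 1048576 kB into a hugepage count (512 pages at this VM's 2048 kB Hugepagesize) and encodes the per-node split as HUGENODE for scripts/setup.sh. A hedged sketch of that conversion; the explicit division is an inference, since the trace only shows the resulting nr_hugepages=512:

#!/usr/bin/env bash
# Hypothetical restatement of the size-to-pages conversion seen above.
size_kb=1048576
hugepage_kb=$(awk '$1 == "Hugepagesize:" {print $2; exit}' /proc/meminfo)   # 2048 on this VM

if (( size_kb >= hugepage_kb )); then
    nr_hugepages=$(( size_kb / hugepage_kb ))   # 512 pages with 2048 kB hugepages
fi

# single node, so everything lands on node 0; this is the value handed to setup.sh
declare -a nodes_hp
nodes_hp[0]=$nr_hugepages
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"
echo "$HUGENODE"   # nodes_hp[0]=512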
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.591 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:12.591 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:12.591 15:01:28 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:14:12.591 15:01:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:14:12.591 15:01:28 -- setup/hugepages.sh@89 -- # local node 00:14:12.591 15:01:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:12.591 15:01:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:12.591 15:01:28 -- setup/hugepages.sh@92 -- # local surp 00:14:12.591 15:01:28 -- setup/hugepages.sh@93 -- # local resv 00:14:12.591 15:01:28 -- setup/hugepages.sh@94 -- # local anon 00:14:12.591 15:01:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:12.591 15:01:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:12.591 15:01:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:12.591 15:01:28 -- setup/common.sh@18 -- # local node= 00:14:12.591 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:12.591 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:12.591 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:12.591 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:12.591 15:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:12.591 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:12.591 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:12.591 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.591 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.591 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8641848 kB' 'MemAvailable: 10567724 kB' 'Buffers: 2436 kB' 'Cached: 2135808 kB' 'SwapCached: 0 kB' 'Active: 887508 kB' 'Inactive: 1368576 kB' 'Active(anon): 128304 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 119428 kB' 'Mapped: 48276 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147308 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77236 kB' 'KernelStack: 6228 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 339768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:12.591 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:12.591 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.591 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.591 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.591 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:12.591 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.591 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.591 15:01:28 -- 
[setup/common.sh@31-32 xtrace, condensed: the /proc/meminfo fields (MemAvailable through HardwareCorrupted) are each compared against AnonHugePages and skipped with 'continue']
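The AnonHugePages pass condensed above runs only because the transparent-hugepage setting checked at hugepages.sh@96 ("always [madvise] never") is not pinned to [never]. A small sketch of that guard, assuming the usual sysfs knob for THP:

#!/usr/bin/env bash
# Hypothetical guard mirroring hugepages.sh@96-97: only count AnonHugePages
# when transparent hugepages are not completely disabled.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run

anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2; exit}' /proc/meminfo)   # kB; 0 kB here
fi
echo "AnonHugePages counted towards the verification: ${anon} kB"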
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:12.592 15:01:28 -- setup/common.sh@33 -- # echo 0 00:14:12.592 15:01:28 -- setup/common.sh@33 -- # return 0 00:14:12.592 15:01:28 -- setup/hugepages.sh@97 -- # anon=0 00:14:12.592 15:01:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:12.592 15:01:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:12.592 15:01:28 -- setup/common.sh@18 -- # local node= 00:14:12.592 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:12.592 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:12.592 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:12.592 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:12.592 15:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:12.592 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:12.593 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.593 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8641848 kB' 'MemAvailable: 10567724 kB' 'Buffers: 2436 kB' 'Cached: 2135808 kB' 'SwapCached: 0 kB' 'Active: 887336 kB' 'Inactive: 1368576 kB' 'Active(anon): 128132 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 119516 kB' 'Mapped: 48136 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147296 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6256 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 339768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.593 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.593 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.593 15:01:28 -- 
[setup/common.sh@31-32 xtrace, condensed: the fields Cached through ShmemHugePages are each compared against HugePages_Surp and skipped with 'continue']
00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.594 15:01:28 -- setup/common.sh@33 -- # echo 0 00:14:12.594 15:01:28 -- setup/common.sh@33 -- # return 0 00:14:12.594 15:01:28 -- setup/hugepages.sh@99 -- # surp=0 00:14:12.594 15:01:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:12.594 15:01:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:12.594 15:01:28 -- setup/common.sh@18 -- # local node= 00:14:12.594 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:12.594 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:12.594 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:12.594 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:12.594 15:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:12.594 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:12.594 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:12.594 
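The mapfile plus prefix-strip pair at common.sh@28-29 keeps reappearing because per-node meminfo files prefix every line with "Node N ", unlike /proc/meminfo. The same normalization on its own (needs extglob for the +([0-9]) pattern, and a NUMA node0 meminfo file to read):

#!/usr/bin/env bash
shopt -s extglob                      # the +([0-9]) pattern below is an extended glob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 MemFree: ..." becomes "MemFree: ..."
printf '%s\n' "${mem[@]:0:3}"         # show the first few normalized lines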
15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8641848 kB' 'MemAvailable: 10567724 kB' 'Buffers: 2436 kB' 'Cached: 2135808 kB' 'SwapCached: 0 kB' 'Active: 887388 kB' 'Inactive: 1368576 kB' 'Active(anon): 128184 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 119340 kB' 'Mapped: 48136 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147296 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6272 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 339768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.594 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.594 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 
15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.595 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.595 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 
15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:12.596 15:01:28 -- setup/common.sh@33 -- # echo 0 00:14:12.596 15:01:28 -- setup/common.sh@33 -- # return 0 00:14:12.596 15:01:28 -- setup/hugepages.sh@100 -- # resv=0 00:14:12.596 15:01:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:12.596 nr_hugepages=512 00:14:12.596 resv_hugepages=0 00:14:12.596 15:01:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:12.596 surplus_hugepages=0 00:14:12.596 15:01:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:12.596 anon_hugepages=0 00:14:12.596 15:01:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:12.596 15:01:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:12.596 15:01:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:12.596 15:01:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:12.596 15:01:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:12.596 15:01:28 -- setup/common.sh@18 -- # local node= 00:14:12.596 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:12.596 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:12.596 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:12.596 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:12.596 15:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:12.596 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:12.596 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8641848 kB' 'MemAvailable: 10567724 kB' 'Buffers: 2436 kB' 'Cached: 2135808 kB' 'SwapCached: 0 kB' 'Active: 887680 kB' 'Inactive: 1368576 kB' 'Active(anon): 128476 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 48136 kB' 
'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147296 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6288 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 339768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.596 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.596 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 
-- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- 
setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.597 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.597 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- 
setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:12.598 15:01:28 -- setup/common.sh@33 -- # echo 512 00:14:12.598 15:01:28 -- setup/common.sh@33 -- # return 0 00:14:12.598 15:01:28 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:12.598 15:01:28 -- setup/hugepages.sh@112 -- # get_nodes 00:14:12.598 15:01:28 -- setup/hugepages.sh@27 -- # local node 00:14:12.598 15:01:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:12.598 15:01:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:12.598 15:01:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:12.598 15:01:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:12.598 15:01:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:12.598 15:01:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:12.598 15:01:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:12.598 15:01:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:12.598 15:01:28 -- setup/common.sh@18 -- # local node=0 00:14:12.598 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:12.598 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:12.598 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:12.598 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:12.598 15:01:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:12.598 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:12.598 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8641848 kB' 'MemUsed: 3600132 kB' 'SwapCached: 0 kB' 'Active: 887552 kB' 'Inactive: 1368576 kB' 'Active(anon): 128348 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368576 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'FilePages: 2138244 kB' 'Mapped: 48136 kB' 'AnonPages: 119496 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70072 kB' 'Slab: 147296 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 
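Once the global pass has confirmed (( 512 == nr_hugepages + surp + resv )) with surp=0 and resv=0, the trace repeats the lookup per NUMA node: get_nodes finds a single node, and get_meminfo HugePages_Surp 0 switches mem_f to /sys/devices/system/node/node0/meminfo, whose entries carry a "Node 0 " prefix that is stripped before the key scan. A rough sketch of that per-node lookup, with illustrative names rather than the actual helper:

node_meminfo_sketch() {          # e.g. node_meminfo_sketch HugePages_Surp 0
    local get=$1 node=$2 mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # per-node entries look like "Node 0 HugePages_Surp: 0"; drop the prefix, then scan keys
    sed "s/^Node $node //" "$mem_f" |
        awk -F': *' -v k="$get" '$1 == k { print $2; exit }'   # prints the raw value field
}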
15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.598 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.598 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # continue 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:12.599 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:12.599 15:01:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:12.599 15:01:28 -- setup/common.sh@33 -- # echo 0 00:14:12.599 15:01:28 -- setup/common.sh@33 -- # return 0 00:14:12.599 15:01:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:12.599 15:01:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:12.599 15:01:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:12.599 15:01:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:12.599 15:01:28 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:12.599 node0=512 expecting 512 00:14:12.599 15:01:28 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:12.599 00:14:12.599 real 0m0.692s 00:14:12.599 user 0m0.306s 00:14:12.599 sys 0m0.436s 00:14:12.599 15:01:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.599 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:14:12.599 ************************************ 00:14:12.599 END TEST custom_alloc 00:14:12.599 ************************************ 00:14:12.859 15:01:28 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:12.859 15:01:28 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.859 15:01:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.859 15:01:28 -- common/autotest_common.sh@10 -- # set +x 00:14:12.859 ************************************ 00:14:12.859 START TEST no_shrink_alloc 00:14:12.859 ************************************ 00:14:12.859 15:01:28 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:14:12.859 15:01:28 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:14:12.859 15:01:28 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:12.859 15:01:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:12.859 15:01:28 -- setup/hugepages.sh@51 -- # shift 00:14:12.859 15:01:28 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:12.859 15:01:28 -- setup/hugepages.sh@52 -- # local node_ids 00:14:12.859 15:01:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:12.859 15:01:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:12.859 15:01:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:12.859 15:01:28 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:12.859 15:01:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:12.859 15:01:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:12.859 15:01:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:12.859 15:01:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:12.859 15:01:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:12.859 15:01:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:12.859 15:01:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:12.859 15:01:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:12.859 15:01:28 -- setup/hugepages.sh@73 -- # return 0 00:14:12.859 15:01:28 -- setup/hugepages.sh@198 -- # setup output 00:14:12.859 15:01:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:12.859 15:01:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:13.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.432 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:13.432 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:13.432 15:01:28 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:14:13.432 15:01:28 -- setup/hugepages.sh@89 -- # local node 00:14:13.432 15:01:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:13.432 15:01:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:13.432 15:01:28 -- setup/hugepages.sh@92 -- # local surp 00:14:13.432 15:01:28 -- setup/hugepages.sh@93 -- # local resv 00:14:13.432 15:01:28 -- setup/hugepages.sh@94 -- # local anon 00:14:13.432 15:01:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:13.432 15:01:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:13.432 15:01:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:13.432 15:01:28 -- setup/common.sh@18 -- # local node= 00:14:13.432 15:01:28 -- setup/common.sh@19 -- # local var val 00:14:13.432 15:01:28 -- setup/common.sh@20 -- # local mem_f mem 00:14:13.432 15:01:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.432 15:01:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.432 15:01:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.432 15:01:28 -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.432 15:01:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:14:13.432 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.432 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.432 15:01:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592736 kB' 'MemAvailable: 9518616 kB' 'Buffers: 2436 kB' 'Cached: 2135812 kB' 'SwapCached: 0 kB' 'Active: 887808 kB' 'Inactive: 1368580 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 48264 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147276 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77204 kB' 'KernelStack: 6304 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 
00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:28 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:28 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 
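Earlier in the trace get_test_nr_hugepages was called with size 2097152 and produced nr_hugepages=1024, and the snapshot above shows the matching HugePages_Total: 1024 and Hugetlb: 2097152 kB. A quick conversion, assuming the size argument is in kB and the default 2048 kB hugepage size reported in the snapshots:

echo $(( 2097152 / 2048 ))    # 1024 hugepages for a 2097152 kB request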
00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.433 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.433 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:13.434 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:13.434 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:13.434 15:01:29 -- setup/hugepages.sh@97 -- # anon=0 00:14:13.434 15:01:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:13.434 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:13.434 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:13.434 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:13.434 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:13.434 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.434 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.434 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.434 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.434 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7593612 kB' 'MemAvailable: 9519492 kB' 'Buffers: 2436 kB' 'Cached: 2135812 kB' 'SwapCached: 0 kB' 'Active: 887532 kB' 'Inactive: 1368580 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 
'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147276 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77204 kB' 'KernelStack: 6272 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.434 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.434 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var 
val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 
00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.435 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.435 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:13.435 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:13.435 15:01:29 -- setup/hugepages.sh@99 -- # surp=0 00:14:13.435 15:01:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:13.435 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:13.435 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:13.435 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:13.435 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:13.435 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.435 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.435 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.435 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.435 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.435 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594552 kB' 'MemAvailable: 9520432 kB' 'Buffers: 2436 kB' 'Cached: 2135812 kB' 'SwapCached: 0 kB' 'Active: 887480 kB' 'Inactive: 1368580 kB' 'Active(anon): 128276 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368580 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147276 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77204 kB' 'KernelStack: 6304 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 342244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- 
# continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.436 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.436 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # 
continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:13.437 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:13.437 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:13.437 nr_hugepages=1024 00:14:13.437 resv_hugepages=0 00:14:13.437 15:01:29 -- 
setup/hugepages.sh@100 -- # resv=0 00:14:13.437 15:01:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:13.437 15:01:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:13.437 surplus_hugepages=0 00:14:13.437 15:01:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:13.437 15:01:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:13.437 anon_hugepages=0 00:14:13.437 15:01:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:13.437 15:01:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:13.437 15:01:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:13.437 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:13.437 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:13.437 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:13.437 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:13.437 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.437 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:13.437 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:13.437 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.437 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594552 kB' 'MemAvailable: 9520436 kB' 'Buffers: 2436 kB' 'Cached: 2135816 kB' 'SwapCached: 0 kB' 'Active: 887308 kB' 'Inactive: 1368584 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147276 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77204 kB' 'KernelStack: 6256 kB' 'PageTables: 3868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.437 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.437 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 
-- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.438 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.438 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.439 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.439 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 
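Taken together, the hugepages.sh trace above amounts to a small accounting check: get_meminfo returned AnonHugePages=0 (anon=0), HugePages_Surp=0 (surp=0), HugePages_Rsvd=0 (resv=0) and HugePages_Total=1024, and the script then asserts that the kernel's total equals the expected pool plus surplus and reserved pages. A minimal sketch of that arithmetic against a live /proc/meminfo, using awk here instead of the traced read loop for brevity; the variable names follow the trace:

#!/usr/bin/env bash
# Sketch of the verify_nr_hugepages accounting seen in the trace:
# HugePages_Total must equal the expected pool size plus surplus and
# reserved pages (1024, 0 and 0 respectively on the traced host).
expected=1024    # value echoed as nr_hugepages=1024 in the log

surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

echo "nr_hugepages=$expected"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The trace checks the sum at hugepages.sh@107/@110 and the plain total at @109.
(( total == expected + surp + resv )) || echo 'hugepage accounting mismatch' >&2
(( total == expected ))               || echo 'unexpected HugePages_Total'   >&2

On the traced VM both checks pass, so the script goes on to distribute the 1024 pages across NUMA nodes; with no_nodes=1 there is only node0 to account for, as the trace continues below.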
00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.699 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.699 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:13.699 15:01:29 -- setup/common.sh@33 -- # echo 1024 00:14:13.699 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:13.699 15:01:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:13.699 15:01:29 -- setup/hugepages.sh@112 -- # get_nodes 00:14:13.699 15:01:29 -- setup/hugepages.sh@27 -- # local node 00:14:13.699 15:01:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:13.699 15:01:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:13.699 15:01:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:13.700 15:01:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:13.700 15:01:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:13.700 15:01:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:13.700 15:01:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:13.700 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:13.700 15:01:29 -- setup/common.sh@18 -- # local node=0 00:14:13.700 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:13.700 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:13.700 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:13.700 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo 
]] 00:14:13.700 15:01:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:13.700 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:13.700 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7594552 kB' 'MemUsed: 4647428 kB' 'SwapCached: 0 kB' 'Active: 887532 kB' 'Inactive: 1368584 kB' 'Active(anon): 128328 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'FilePages: 2138252 kB' 'Mapped: 48144 kB' 'AnonPages: 119480 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70072 kB' 'Slab: 147276 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 
-- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.700 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.700 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.701 15:01:29 -- setup/common.sh@32 -- 
# continue 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # continue 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:13.701 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:13.701 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:13.701 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:13.701 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:13.701 node0=1024 expecting 1024 00:14:13.701 15:01:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:13.701 15:01:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:13.701 15:01:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:13.701 15:01:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:13.701 15:01:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:13.701 15:01:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:13.701 15:01:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:14:13.701 15:01:29 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:14:13.701 15:01:29 -- setup/hugepages.sh@202 -- # setup output 00:14:13.701 15:01:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:13.701 15:01:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:14.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:14.273 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.273 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:14.273 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:14:14.273 15:01:29 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:14:14.273 15:01:29 -- setup/hugepages.sh@89 -- # local node 00:14:14.273 15:01:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:14.273 15:01:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:14.273 15:01:29 -- setup/hugepages.sh@92 -- # local surp 00:14:14.273 15:01:29 -- setup/hugepages.sh@93 -- # local resv 00:14:14.273 15:01:29 -- setup/hugepages.sh@94 -- # local anon 00:14:14.273 15:01:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:14.273 15:01:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:14.273 15:01:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:14.273 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:14.273 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:14.273 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:14.273 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.273 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.273 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.273 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.273 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 
-- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592748 kB' 'MemAvailable: 9518632 kB' 'Buffers: 2436 kB' 'Cached: 2135816 kB' 'SwapCached: 0 kB' 'Active: 887804 kB' 'Inactive: 1368584 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 48272 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147292 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6244 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.273 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.273 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 
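The stretch of trace above is the get_meminfo helper from setup/common.sh stepping through the /proc/meminfo snapshot it just printed: each line is split with IFS=': ' into a key and a value, every key that is not the requested one (AnonHugePages here) hits continue, and the matching key's value is echoed before the function returns. A minimal self-contained sketch of that lookup pattern follows; the function name meminfo_value is illustrative, not the SPDK helper itself.

meminfo_value() {
    local get=$1 var val _
    # Split each "Key:   value kB" line into key/value and skip non-matches,
    # the same [[ ... ]] / continue pattern visible in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value AnonHugePages    # prints 0 on this runner
meminfo_value HugePages_Total  # prints 1024 on this runner
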
00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 
00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # 
continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.274 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:14.274 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:14.274 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:14.274 15:01:29 -- setup/hugepages.sh@97 -- # anon=0 00:14:14.274 15:01:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:14.274 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:14.274 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:14.274 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:14.274 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:14.274 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.274 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.274 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.274 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.274 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.274 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592756 kB' 'MemAvailable: 9518640 kB' 'Buffers: 2436 kB' 'Cached: 2135816 kB' 'SwapCached: 0 kB' 'Active: 887732 kB' 'Inactive: 1368584 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119872 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147292 kB' 'SReclaimable: 70072 kB' 
'SUnreclaim: 77220 kB' 'KernelStack: 6272 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- 
setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.275 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.275 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 
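Each get_meminfo pass in this trace also decides which file to parse: with node= left empty the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and mem_f stays /proc/meminfo, while the per-node pass later in the log sets node=0 and switches to /sys/devices/system/node/node0/meminfo. Below is a hedged sketch of that source selection; pick_meminfo_file is an illustrative name and the paths are the standard procfs/sysfs locations.

pick_meminfo_file() {
    local node=$1 mem_f=/proc/meminfo
    # Prefer the per-node file when a NUMA node number was supplied and exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}

pick_meminfo_file      # -> /proc/meminfo
pick_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo (when node0 exists)
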
00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.276 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:14.276 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:14.276 15:01:29 -- setup/hugepages.sh@99 -- # surp=0 00:14:14.276 15:01:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:14.276 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:14.276 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:14.276 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:14.276 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:14.276 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.276 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.276 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.276 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.276 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592784 kB' 'MemAvailable: 9518668 kB' 'Buffers: 2436 kB' 'Cached: 2135816 kB' 'SwapCached: 0 kB' 'Active: 887384 kB' 'Inactive: 1368584 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147292 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6272 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 
15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.276 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.276 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.277 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.277 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:14.277 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:14.277 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:14.277 15:01:29 -- setup/hugepages.sh@100 -- # resv=0 00:14:14.277 nr_hugepages=1024 00:14:14.277 15:01:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:14.277 resv_hugepages=0 00:14:14.277 
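By this point the lookups above have produced anon=0, surp=0 and resv=0, and the runner reports nr_hugepages=1024; the arithmetic gate that follows in the trace checks that the 1024 pages seen in meminfo line up with the requested count plus surplus and reserved pages. Here is a self-contained sketch of that gate; reading the requested count from /proc/sys/vm/nr_hugepages is an assumption made so the example runs on its own, since the trace does not show where the script's nr_hugepages value originates.

# get() is a throwaway helper for this sketch, not part of the SPDK scripts.
get() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=$(</proc/sys/vm/nr_hugepages)   # assumed source of the requested count
total=$(get HugePages_Total)
surp=$(get HugePages_Surp)
resv=$(get HugePages_Rsvd)

# Mirrors the "(( 1024 == nr_hugepages + surp + resv ))" check in the trace;
# with surp and resv both 0 on this runner it reduces to total == requested.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total total, $surp surplus, $resv reserved"
else
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
fi
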
15:01:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:14.277 surplus_hugepages=0 00:14:14.277 15:01:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:14.277 anon_hugepages=0 00:14:14.277 15:01:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:14.277 15:01:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:14.277 15:01:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:14.277 15:01:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:14.277 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:14.277 15:01:29 -- setup/common.sh@18 -- # local node= 00:14:14.277 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:14.277 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:14.277 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.277 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:14.277 15:01:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:14.277 15:01:29 -- setup/common.sh@28 -- # mapfile -t mem 00:14:14.277 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.278 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592784 kB' 'MemAvailable: 9518668 kB' 'Buffers: 2436 kB' 'Cached: 2135816 kB' 'SwapCached: 0 kB' 'Active: 887384 kB' 'Inactive: 1368584 kB' 'Active(anon): 128180 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 48144 kB' 'Shmem: 10464 kB' 'KReclaimable: 70072 kB' 'Slab: 147292 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6272 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 6127616 kB' 'DirectMap1G: 8388608 kB' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 
-- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 
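The same parser is about to be reused for the per-node view: meminfo files under /sys/devices/system/node prefix every field with "Node N ", and the mem=("${mem[@]#Node +([0-9]) }") step that appears before each matching loop strips that prefix so the key comparison works for both sources. A short illustrative sketch follows; it needs bash's extglob for the +([0-9]) pattern and assumes node0 exists on the runner.

shopt -s extglob

# Per-node lines look like "Node 0 HugePages_Total:  1024"; drop the prefix
# so they parse exactly like /proc/meminfo.
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}" | awk '$1 == "HugePages_Total:" {print $2}'   # 1024 on this runner
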
00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.278 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.278 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 
15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:14.279 15:01:29 -- setup/common.sh@33 -- # echo 1024 00:14:14.279 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:14.279 15:01:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:14.279 15:01:29 -- setup/hugepages.sh@112 -- # get_nodes 00:14:14.279 15:01:29 -- setup/hugepages.sh@27 -- # local node 00:14:14.279 15:01:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:14.279 15:01:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:14.279 15:01:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:14.279 15:01:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:14.279 15:01:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:14.279 15:01:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:14.279 15:01:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:14.279 15:01:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:14.279 15:01:29 -- setup/common.sh@18 -- # local node=0 00:14:14.279 15:01:29 -- setup/common.sh@19 -- # local var val 00:14:14.279 15:01:29 -- setup/common.sh@20 -- # local mem_f mem 00:14:14.279 15:01:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:14.279 15:01:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:14.279 15:01:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:14.279 15:01:29 -- 
setup/common.sh@28 -- # mapfile -t mem 00:14:14.279 15:01:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7592784 kB' 'MemUsed: 4649196 kB' 'SwapCached: 0 kB' 'Active: 887564 kB' 'Inactive: 1368584 kB' 'Active(anon): 128360 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1368584 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'FilePages: 2138252 kB' 'Mapped: 48144 kB' 'AnonPages: 119724 kB' 'Shmem: 10464 kB' 'KernelStack: 6256 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70072 kB' 'Slab: 147292 kB' 'SReclaimable: 70072 kB' 'SUnreclaim: 77220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 
-- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.279 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.279 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 
15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 
00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # continue 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # IFS=': ' 00:14:14.280 15:01:29 -- setup/common.sh@31 -- # read -r var val _ 00:14:14.280 15:01:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:14.280 15:01:29 -- setup/common.sh@33 -- # echo 0 00:14:14.280 15:01:29 -- setup/common.sh@33 -- # return 0 00:14:14.280 15:01:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:14.280 15:01:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:14.280 15:01:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:14.280 15:01:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:14.280 15:01:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:14.280 node0=1024 expecting 1024 00:14:14.280 15:01:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:14.280 00:14:14.280 real 0m1.513s 00:14:14.280 user 0m0.689s 00:14:14.280 sys 0m0.882s 00:14:14.280 15:01:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.280 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.280 ************************************ 00:14:14.280 END TEST no_shrink_alloc 00:14:14.280 ************************************ 00:14:14.540 15:01:29 -- setup/hugepages.sh@217 -- # clear_hp 00:14:14.540 15:01:29 -- setup/hugepages.sh@37 -- # local node hp 00:14:14.540 15:01:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:14.540 15:01:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:14.540 15:01:29 -- setup/hugepages.sh@41 -- # echo 0 00:14:14.540 15:01:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:14.540 15:01:29 -- setup/hugepages.sh@41 -- # echo 0 00:14:14.540 15:01:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:14.540 15:01:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:14.540 00:14:14.540 real 0m6.629s 00:14:14.540 user 0m2.852s 00:14:14.540 sys 0m3.856s 00:14:14.540 15:01:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.540 15:01:29 -- common/autotest_common.sh@10 -- # set +x 00:14:14.540 ************************************ 00:14:14.540 END TEST hugepages 00:14:14.540 ************************************ 00:14:14.540 15:01:30 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:14.540 15:01:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.540 15:01:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.540 15:01:30 -- common/autotest_common.sh@10 -- # set +x 00:14:14.540 ************************************ 00:14:14.540 START TEST driver 00:14:14.540 ************************************ 00:14:14.540 15:01:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:14.799 * Looking for test storage... 
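The hugepages assertions traced above reduce to reading single keys out of /proc/meminfo, or out of the per-node copy under sysfs when a NUMA node is given. A minimal standalone sketch of that lookup (assuming a Linux host with the usual "Key: value" meminfo layout; the real setup/common.sh walks every line with read/IFS as shown in the trace, but the result is the same single value):

get_meminfo_value() {
    # Usage: get_meminfo_value <Key> [numa-node]
    local key=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi
    # The per-node file prefixes every line with "Node N "; strip that, then match "Key: value".
    sed 's/^Node [0-9]* *//' "$mem_f" | awk -v k="$key" -F'[: ]+' '$1 == k { print $2 }'
}

get_meminfo_value HugePages_Total     # system-wide total, 1024 in this run
get_meminfo_value HugePages_Surp 0    # surplus pages on node 0, 0 in this run
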
00:14:14.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:14.799 15:01:30 -- setup/driver.sh@68 -- # setup reset 00:14:14.799 15:01:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:14.799 15:01:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:15.736 15:01:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:14:15.736 15:01:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:15.736 15:01:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.736 15:01:31 -- common/autotest_common.sh@10 -- # set +x 00:14:15.736 ************************************ 00:14:15.736 START TEST guess_driver 00:14:15.736 ************************************ 00:14:15.736 15:01:31 -- common/autotest_common.sh@1111 -- # guess_driver 00:14:15.736 15:01:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:14:15.736 15:01:31 -- setup/driver.sh@47 -- # local fail=0 00:14:15.736 15:01:31 -- setup/driver.sh@49 -- # pick_driver 00:14:15.736 15:01:31 -- setup/driver.sh@36 -- # vfio 00:14:15.736 15:01:31 -- setup/driver.sh@21 -- # local iommu_grups 00:14:15.736 15:01:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:14:15.736 15:01:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:14:15.736 15:01:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:14:15.736 15:01:31 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:14:15.736 15:01:31 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:14:15.736 15:01:31 -- setup/driver.sh@32 -- # return 1 00:14:15.736 15:01:31 -- setup/driver.sh@38 -- # uio 00:14:15.736 15:01:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:14:15.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:14:15.736 15:01:31 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:14:15.736 Looking for driver=uio_pci_generic 00:14:15.736 15:01:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:14:15.736 15:01:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:15.736 15:01:31 -- setup/driver.sh@45 -- # setup output config 00:14:15.736 15:01:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:15.736 15:01:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:16.675 15:01:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:14:16.675 15:01:32 -- setup/driver.sh@58 -- # continue 00:14:16.675 15:01:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:16.676 15:01:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:16.676 15:01:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:16.676 15:01:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:16.676 15:01:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:16.676 15:01:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:16.676 15:01:32 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:16.676 15:01:32 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:14:16.676 15:01:32 -- setup/driver.sh@65 -- # setup reset 00:14:16.676 15:01:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:16.676 15:01:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:17.623 00:14:17.623 real 0m1.825s 00:14:17.623 user 0m0.629s 00:14:17.623 sys 0m1.273s 00:14:17.623 15:01:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.623 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:17.623 ************************************ 00:14:17.623 END TEST guess_driver 00:14:17.624 ************************************ 00:14:17.624 00:14:17.624 real 0m2.920s 00:14:17.624 user 0m1.018s 00:14:17.624 sys 0m2.061s 00:14:17.624 15:01:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.624 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:17.624 ************************************ 00:14:17.624 END TEST driver 00:14:17.624 ************************************ 00:14:17.624 15:01:33 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:17.624 15:01:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:17.624 15:01:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.624 15:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:17.624 ************************************ 00:14:17.624 START TEST devices 00:14:17.624 ************************************ 00:14:17.624 15:01:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:17.883 * Looking for test storage... 00:14:17.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:17.883 15:01:33 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:14:17.883 15:01:33 -- setup/devices.sh@192 -- # setup reset 00:14:17.883 15:01:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:17.883 15:01:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:18.451 15:01:34 -- setup/devices.sh@194 -- # get_zoned_devs 00:14:18.451 15:01:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:18.451 15:01:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:18.451 15:01:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:18.451 15:01:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:18.451 15:01:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:18.451 15:01:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:18.451 15:01:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:18.451 15:01:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:14:18.451 15:01:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:14:18.451 15:01:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:18.451 15:01:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:14:18.451 15:01:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:14:18.451 15:01:34 -- 
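The guess_driver pass traced above prefers vfio only when the host exposes usable IOMMU groups (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic after confirming the module exists. Roughly, and assuming modprobe is available, the decision comes down to:

pick_pci_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci                                    # IOMMU groups present: vfio is usable
    elif modprobe --show-depends uio_pci_generic >/dev/null 2>&1; then
        echo uio_pci_generic                             # no IOMMU, as on this VM: fall back to uio
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}
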
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:18.451 15:01:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:18.451 15:01:34 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:18.451 15:01:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:18.451 15:01:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:18.451 15:01:34 -- setup/devices.sh@196 -- # blocks=() 00:14:18.451 15:01:34 -- setup/devices.sh@196 -- # declare -a blocks 00:14:18.451 15:01:34 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:14:18.451 15:01:34 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:14:18.451 15:01:34 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:14:18.451 15:01:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:18.451 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:14:18.451 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:18.451 15:01:34 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:18.451 15:01:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:18.451 15:01:34 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:14:18.451 15:01:34 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:14:18.451 15:01:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:14:18.451 No valid GPT data, bailing 00:14:18.451 15:01:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:18.451 15:01:34 -- scripts/common.sh@391 -- # pt= 00:14:18.451 15:01:34 -- scripts/common.sh@392 -- # return 1 00:14:18.451 15:01:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:14:18.451 15:01:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:18.451 15:01:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:18.451 15:01:34 -- setup/common.sh@80 -- # echo 4294967296 00:14:18.451 15:01:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:18.451 15:01:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:18.451 15:01:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:18.451 15:01:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:18.451 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:14:18.451 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:18.451 15:01:34 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:18.451 15:01:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:18.451 15:01:34 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:14:18.451 15:01:34 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:14:18.451 15:01:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:14:18.708 No valid GPT data, bailing 00:14:18.708 15:01:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:14:18.708 15:01:34 -- scripts/common.sh@391 -- # pt= 00:14:18.708 15:01:34 -- scripts/common.sh@392 -- # return 1 00:14:18.708 15:01:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:14:18.708 15:01:34 -- setup/common.sh@76 -- # local dev=nvme0n2 00:14:18.708 15:01:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:14:18.708 15:01:34 -- setup/common.sh@80 -- # echo 4294967296 00:14:18.708 15:01:34 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:18.708 15:01:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:18.708 15:01:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:18.708 15:01:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:18.708 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:14:18.708 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:18.708 15:01:34 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:18.708 15:01:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:18.708 15:01:34 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:14:18.709 15:01:34 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:14:18.709 15:01:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:14:18.709 No valid GPT data, bailing 00:14:18.709 15:01:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:14:18.709 15:01:34 -- scripts/common.sh@391 -- # pt= 00:14:18.709 15:01:34 -- scripts/common.sh@392 -- # return 1 00:14:18.709 15:01:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:14:18.709 15:01:34 -- setup/common.sh@76 -- # local dev=nvme0n3 00:14:18.709 15:01:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:14:18.709 15:01:34 -- setup/common.sh@80 -- # echo 4294967296 00:14:18.709 15:01:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:18.709 15:01:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:18.709 15:01:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:18.709 15:01:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:18.709 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:14:18.709 15:01:34 -- setup/devices.sh@201 -- # ctrl=nvme1 00:14:18.709 15:01:34 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:14:18.709 15:01:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:18.709 15:01:34 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:14:18.709 15:01:34 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:14:18.709 15:01:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:14:18.709 No valid GPT data, bailing 00:14:18.709 15:01:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:18.709 15:01:34 -- scripts/common.sh@391 -- # pt= 00:14:18.709 15:01:34 -- scripts/common.sh@392 -- # return 1 00:14:18.709 15:01:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:14:18.709 15:01:34 -- setup/common.sh@76 -- # local dev=nvme1n1 00:14:18.709 15:01:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:14:18.709 15:01:34 -- setup/common.sh@80 -- # echo 5368709120 00:14:18.709 15:01:34 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:14:18.709 15:01:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:18.709 15:01:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:14:18.709 15:01:34 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:14:18.709 15:01:34 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:14:18.709 15:01:34 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:14:18.709 15:01:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.709 15:01:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.709 15:01:34 -- common/autotest_common.sh@10 -- # set +x 00:14:18.709 
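The device scan above keeps an NVMe namespace only if it is not zoned, carries no partition-table signature, and is at least min_disk_size (3 GiB) large. A rough standalone equivalent, assuming blkid is installed and the script runs as root (the harness additionally consults scripts/spdk-gpt.py, which is omitted here):

min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3 GiB, the same floor as devices.sh@198
blocks=()
for blk in /sys/block/nvme*; do
    dev=${blk##*/}
    [[ $dev == *c* ]] && continue                 # skip multipath controller nodes (nvmeXcYnZ)
    # Zoned namespaces cannot back the ext4 scratch filesystems used by the tests.
    if [[ -e $blk/queue/zoned && $(< "$blk/queue/zoned") != none ]]; then
        continue
    fi
    # A non-empty PTTYPE means the namespace already carries a partition table.
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        continue
    fi
    size=$(( $(< "$blk/size") * 512 ))            # /sys/block/*/size counts 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done
printf 'candidate test disk: %s\n' "${blocks[@]}"
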
************************************ 00:14:18.709 START TEST nvme_mount 00:14:18.709 ************************************ 00:14:18.709 15:01:34 -- common/autotest_common.sh@1111 -- # nvme_mount 00:14:18.709 15:01:34 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:14:18.709 15:01:34 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:14:18.709 15:01:34 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:18.709 15:01:34 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:18.709 15:01:34 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:14:18.709 15:01:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:18.709 15:01:34 -- setup/common.sh@40 -- # local part_no=1 00:14:18.709 15:01:34 -- setup/common.sh@41 -- # local size=1073741824 00:14:18.709 15:01:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:18.709 15:01:34 -- setup/common.sh@44 -- # parts=() 00:14:18.709 15:01:34 -- setup/common.sh@44 -- # local parts 00:14:18.709 15:01:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:14:18.709 15:01:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:18.709 15:01:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:18.709 15:01:34 -- setup/common.sh@46 -- # (( part++ )) 00:14:18.709 15:01:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:18.709 15:01:34 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:18.709 15:01:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:18.709 15:01:34 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:14:20.087 Creating new GPT entries in memory. 00:14:20.087 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:20.087 other utilities. 00:14:20.087 15:01:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:14:20.087 15:01:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:20.087 15:01:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:20.087 15:01:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:20.087 15:01:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:21.068 Creating new GPT entries in memory. 00:14:21.068 The operation has completed successfully. 
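Condensed, the partition-and-format sequence that just completed (destroy the GPT, create one partition starting at sector 2048, format and mount it) looks roughly like the following; the mount point is illustrative, and the harness syncs on udev events via sync_dev_uevents.sh rather than calling partprobe:

disk=/dev/nvme0n1                          # the scratch disk - every signature on it is destroyed
start=2048
count=$(( 1073741824 / 4096 ))             # 262144, the same arithmetic setup/common.sh applies
end=$(( start + count - 1 ))               # 264191, matching the sgdisk call in the trace

sgdisk "$disk" --zap-all                   # wipe existing GPT and MBR structures
sgdisk "$disk" --new=1:${start}:${end}     # create partition 1 over sectors 2048..264191
partprobe "$disk"                          # simplification: the test waits on udev events instead
mkfs.ext4 -qF "${disk}p1"
mkdir -p /mnt/nvme_test && mount "${disk}p1" /mnt/nvme_test
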
00:14:21.068 15:01:36 -- setup/common.sh@57 -- # (( part++ )) 00:14:21.068 15:01:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:21.068 15:01:36 -- setup/common.sh@62 -- # wait 58257 00:14:21.068 15:01:36 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.068 15:01:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:14:21.068 15:01:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.068 15:01:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:14:21.068 15:01:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:14:21.068 15:01:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.068 15:01:36 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:21.068 15:01:36 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:21.068 15:01:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:14:21.068 15:01:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.068 15:01:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:21.068 15:01:36 -- setup/devices.sh@53 -- # local found=0 00:14:21.068 15:01:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:21.068 15:01:36 -- setup/devices.sh@56 -- # : 00:14:21.068 15:01:36 -- setup/devices.sh@59 -- # local pci status 00:14:21.068 15:01:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:21.068 15:01:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:21.068 15:01:36 -- setup/devices.sh@47 -- # setup output config 00:14:21.068 15:01:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:21.068 15:01:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:21.327 15:01:36 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:21.327 15:01:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:14:21.327 15:01:36 -- setup/devices.sh@63 -- # found=1 00:14:21.327 15:01:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:21.327 15:01:36 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:21.327 15:01:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:21.327 15:01:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:21.327 15:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:21.586 15:01:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:21.586 15:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:21.586 15:01:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:21.586 15:01:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:21.586 15:01:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.586 15:01:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:21.586 15:01:37 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:21.586 15:01:37 -- setup/devices.sh@110 -- # cleanup_nvme 00:14:21.586 15:01:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.586 15:01:37 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.586 15:01:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:21.586 15:01:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:21.586 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:21.586 15:01:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:21.586 15:01:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:21.845 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:21.845 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:21.845 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:21.845 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:21.845 15:01:37 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:14:21.845 15:01:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:14:21.845 15:01:37 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:21.845 15:01:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:14:21.845 15:01:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:14:22.105 15:01:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:22.105 15:01:37 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:22.105 15:01:37 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:22.105 15:01:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:14:22.105 15:01:37 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:22.105 15:01:37 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:22.105 15:01:37 -- setup/devices.sh@53 -- # local found=0 00:14:22.105 15:01:37 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:22.105 15:01:37 -- setup/devices.sh@56 -- # : 00:14:22.105 15:01:37 -- setup/devices.sh@59 -- # local pci status 00:14:22.105 15:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.105 15:01:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:22.105 15:01:37 -- setup/devices.sh@47 -- # setup output config 00:14:22.105 15:01:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:22.105 15:01:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:22.364 15:01:37 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:22.364 15:01:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:14:22.364 15:01:37 -- setup/devices.sh@63 -- # found=1 00:14:22.364 15:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.364 15:01:37 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:22.364 
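The cleanup_nvme step traced here (unmount, then wipefs the partition and the whole disk so the ext4 "53 ef" and GPT signatures reported above disappear) can be sketched as the helper below; the mount point argument is illustrative:

cleanup_scratch_disk() {
    local mnt=$1 disk=$2
    mountpoint -q "$mnt" && umount "$mnt"
    # Remove filesystem signatures from each partition, then the partition table itself,
    # so the next sub-test starts from an unsigned disk.
    local part
    for part in "${disk}"p*; do
        [[ -b $part ]] && wipefs --all "$part"
    done
    wipefs --all "$disk"
}

cleanup_scratch_disk /mnt/nvme_test /dev/nvme0n1
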
15:01:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.364 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:22.364 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.624 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:22.624 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.624 15:01:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:22.624 15:01:38 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:22.624 15:01:38 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:22.624 15:01:38 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:22.624 15:01:38 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:22.624 15:01:38 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:22.624 15:01:38 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:14:22.624 15:01:38 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:22.624 15:01:38 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:14:22.624 15:01:38 -- setup/devices.sh@50 -- # local mount_point= 00:14:22.624 15:01:38 -- setup/devices.sh@51 -- # local test_file= 00:14:22.624 15:01:38 -- setup/devices.sh@53 -- # local found=0 00:14:22.624 15:01:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:22.624 15:01:38 -- setup/devices.sh@59 -- # local pci status 00:14:22.624 15:01:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:22.624 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:22.624 15:01:38 -- setup/devices.sh@47 -- # setup output config 00:14:22.624 15:01:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:22.624 15:01:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:23.192 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:23.192 15:01:38 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:14:23.192 15:01:38 -- setup/devices.sh@63 -- # found=1 00:14:23.192 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:23.192 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:23.192 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:23.192 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:23.193 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:23.451 15:01:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:23.451 15:01:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:23.451 15:01:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:23.451 15:01:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:23.451 15:01:39 -- setup/devices.sh@68 -- # return 0 00:14:23.451 15:01:39 -- setup/devices.sh@128 -- # cleanup_nvme 00:14:23.451 15:01:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:23.451 15:01:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:23.451 15:01:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:23.451 15:01:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:23.451 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:14:23.451 00:14:23.451 real 0m4.659s 00:14:23.451 user 0m0.872s 00:14:23.451 sys 0m1.527s 00:14:23.451 15:01:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.451 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:14:23.451 ************************************ 00:14:23.451 END TEST nvme_mount 00:14:23.451 ************************************ 00:14:23.451 15:01:39 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:14:23.451 15:01:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.451 15:01:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.451 15:01:39 -- common/autotest_common.sh@10 -- # set +x 00:14:23.711 ************************************ 00:14:23.711 START TEST dm_mount 00:14:23.711 ************************************ 00:14:23.712 15:01:39 -- common/autotest_common.sh@1111 -- # dm_mount 00:14:23.712 15:01:39 -- setup/devices.sh@144 -- # pv=nvme0n1 00:14:23.712 15:01:39 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:14:23.712 15:01:39 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:14:23.712 15:01:39 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:14:23.712 15:01:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:23.712 15:01:39 -- setup/common.sh@40 -- # local part_no=2 00:14:23.712 15:01:39 -- setup/common.sh@41 -- # local size=1073741824 00:14:23.712 15:01:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:23.712 15:01:39 -- setup/common.sh@44 -- # parts=() 00:14:23.712 15:01:39 -- setup/common.sh@44 -- # local parts 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:23.712 15:01:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part++ )) 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:23.712 15:01:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part++ )) 00:14:23.712 15:01:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:23.712 15:01:39 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:23.712 15:01:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:23.712 15:01:39 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:14:24.648 Creating new GPT entries in memory. 00:14:24.648 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:24.648 other utilities. 00:14:24.648 15:01:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:14:24.648 15:01:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:24.648 15:01:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:24.648 15:01:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:24.648 15:01:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:26.036 Creating new GPT entries in memory. 00:14:26.036 The operation has completed successfully. 00:14:26.036 15:01:41 -- setup/common.sh@57 -- # (( part++ )) 00:14:26.036 15:01:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:26.036 15:01:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:14:26.036 15:01:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:26.036 15:01:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:14:26.972 The operation has completed successfully. 00:14:26.972 15:01:42 -- setup/common.sh@57 -- # (( part++ )) 00:14:26.972 15:01:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:26.972 15:01:42 -- setup/common.sh@62 -- # wait 58700 00:14:26.972 15:01:42 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:14:26.972 15:01:42 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:26.972 15:01:42 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:26.973 15:01:42 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:14:26.973 15:01:42 -- setup/devices.sh@160 -- # for t in {1..5} 00:14:26.973 15:01:42 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:26.973 15:01:42 -- setup/devices.sh@161 -- # break 00:14:26.973 15:01:42 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:26.973 15:01:42 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:14:26.973 15:01:42 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:14:26.973 15:01:42 -- setup/devices.sh@166 -- # dm=dm-0 00:14:26.973 15:01:42 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:14:26.973 15:01:42 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:14:26.973 15:01:42 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:26.973 15:01:42 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:14:26.973 15:01:42 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:26.973 15:01:42 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:26.973 15:01:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:14:26.973 15:01:42 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:26.973 15:01:42 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:26.973 15:01:42 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:26.973 15:01:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:14:26.973 15:01:42 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:26.973 15:01:42 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:26.973 15:01:42 -- setup/devices.sh@53 -- # local found=0 00:14:26.973 15:01:42 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:26.973 15:01:42 -- setup/devices.sh@56 -- # : 00:14:26.973 15:01:42 -- setup/devices.sh@59 -- # local pci status 00:14:26.973 15:01:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:26.973 15:01:42 -- setup/devices.sh@47 -- # setup output config 00:14:26.973 15:01:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:26.973 15:01:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:26.973 15:01:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:27.232 15:01:42 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.232 15:01:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:14:27.232 15:01:42 -- setup/devices.sh@63 -- # found=1 00:14:27.232 15:01:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.232 15:01:42 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.232 15:01:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.491 15:01:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.491 15:01:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.491 15:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.491 15:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.491 15:01:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:27.491 15:01:43 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:14:27.492 15:01:43 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:27.492 15:01:43 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:27.492 15:01:43 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:27.492 15:01:43 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:27.492 15:01:43 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:14:27.492 15:01:43 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:27.492 15:01:43 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:14:27.492 15:01:43 -- setup/devices.sh@50 -- # local mount_point= 00:14:27.492 15:01:43 -- setup/devices.sh@51 -- # local test_file= 00:14:27.492 15:01:43 -- setup/devices.sh@53 -- # local found=0 00:14:27.492 15:01:43 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:27.492 15:01:43 -- setup/devices.sh@59 -- # local pci status 00:14:27.492 15:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.492 15:01:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:27.492 15:01:43 -- setup/devices.sh@47 -- # setup output config 00:14:27.492 15:01:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:27.492 15:01:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:27.750 15:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.750 15:01:43 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:14:27.750 15:01:43 -- setup/devices.sh@63 -- # found=1 00:14:27.750 15:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:27.750 15:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.750 15:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:28.009 15:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:28.009 15:01:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:28.268 15:01:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:28.268 15:01:43 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:28.268 15:01:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:28.268 15:01:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:28.268 15:01:43 -- setup/devices.sh@68 -- # return 0 00:14:28.268 15:01:43 -- setup/devices.sh@187 -- # cleanup_dm 00:14:28.268 15:01:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:28.268 15:01:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:28.268 15:01:43 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:14:28.268 15:01:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:28.268 15:01:43 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:14:28.268 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:28.268 15:01:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:28.268 15:01:43 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:28.268 00:14:28.268 real 0m4.650s 00:14:28.268 user 0m0.549s 00:14:28.268 sys 0m1.038s 00:14:28.268 15:01:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.268 15:01:43 -- common/autotest_common.sh@10 -- # set +x 00:14:28.268 ************************************ 00:14:28.268 END TEST dm_mount 00:14:28.268 ************************************ 00:14:28.268 15:01:43 -- setup/devices.sh@1 -- # cleanup 00:14:28.268 15:01:43 -- setup/devices.sh@11 -- # cleanup_nvme 00:14:28.268 15:01:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:28.268 15:01:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:28.268 15:01:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:28.268 15:01:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:28.268 15:01:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:28.528 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:28.528 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:28.528 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:28.528 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:28.788 15:01:44 -- setup/devices.sh@12 -- # cleanup_dm 00:14:28.788 15:01:44 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:28.788 15:01:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:28.788 15:01:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:28.788 15:01:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:28.788 15:01:44 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:28.788 15:01:44 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:28.788 00:14:28.788 real 0m11.028s 00:14:28.788 user 0m2.122s 00:14:28.788 sys 0m3.297s 00:14:28.788 15:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.788 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:14:28.788 ************************************ 00:14:28.788 END TEST devices 00:14:28.788 ************************************ 00:14:28.788 ************************************ 00:14:28.788 END TEST setup.sh 00:14:28.788 ************************************ 00:14:28.788 00:14:28.788 real 0m27.545s 00:14:28.788 user 0m8.673s 00:14:28.788 sys 0m13.396s 00:14:28.788 15:01:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.788 15:01:44 -- common/autotest_common.sh@10 -- # set +x 00:14:28.788 15:01:44 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:29.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.728 Hugepages 00:14:29.728 node hugesize free / total 00:14:29.728 node0 1048576kB 0 / 0 00:14:29.728 node0 2048kB 2048 / 2048 00:14:29.728 00:14:29.728 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:29.729 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:29.729 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:29.988 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:14:29.988 15:01:45 -- spdk/autotest.sh@130 -- # uname -s 00:14:29.988 15:01:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:29.988 15:01:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:29.988 15:01:45 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:30.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:30.816 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:30.816 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:30.816 15:01:46 -- common/autotest_common.sh@1518 -- # sleep 1 00:14:32.195 15:01:47 -- common/autotest_common.sh@1519 -- # bdfs=() 00:14:32.195 15:01:47 -- common/autotest_common.sh@1519 -- # local bdfs 00:14:32.195 15:01:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:14:32.195 15:01:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:14:32.195 15:01:47 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:32.195 15:01:47 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:32.195 15:01:47 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:32.195 15:01:47 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:32.195 15:01:47 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:32.195 15:01:47 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:14:32.195 15:01:47 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:32.195 15:01:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:32.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:32.455 Waiting for block devices as requested 00:14:32.455 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:32.714 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:32.714 15:01:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:32.714 15:01:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:32.714 15:01:48 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:32.714 15:01:48 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:14:32.714 15:01:48 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:32.714 15:01:48 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:32.715 15:01:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:32.715 15:01:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:32.715 15:01:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1543 -- # continue 00:14:32.715 15:01:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:32.715 15:01:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:14:32.715 15:01:48 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:32.715 15:01:48 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:32.715 15:01:48 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:32.715 15:01:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:32.715 15:01:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:32.715 15:01:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:32.974 15:01:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:32.975 15:01:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:32.975 15:01:48 -- common/autotest_common.sh@1543 -- # continue 00:14:32.975 15:01:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:32.975 15:01:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:32.975 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:14:32.975 15:01:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:32.975 15:01:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:32.975 15:01:48 -- common/autotest_common.sh@10 -- # set +x 00:14:32.975 15:01:48 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:33.543 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:14:33.803 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.803 15:01:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:33.803 15:01:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:33.803 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:34.061 15:01:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:34.061 15:01:49 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:14:34.061 15:01:49 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:14:34.061 15:01:49 -- common/autotest_common.sh@1563 -- # bdfs=() 00:14:34.061 15:01:49 -- common/autotest_common.sh@1563 -- # local bdfs 00:14:34.061 15:01:49 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:14:34.061 15:01:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:34.061 15:01:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:34.061 15:01:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:34.061 15:01:49 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:34.061 15:01:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:34.061 15:01:49 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:14:34.061 15:01:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:34.061 15:01:49 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:14:34.061 15:01:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:34.061 15:01:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:34.061 15:01:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:34.061 15:01:49 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:14:34.062 15:01:49 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:34.062 15:01:49 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:34.062 15:01:49 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:34.062 15:01:49 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:14:34.062 15:01:49 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:14:34.062 15:01:49 -- common/autotest_common.sh@1579 -- # return 0 00:14:34.062 15:01:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:14:34.062 15:01:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:14:34.062 15:01:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:34.062 15:01:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:34.062 15:01:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:14:34.062 15:01:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:34.062 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:34.062 15:01:49 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:34.062 15:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:34.062 15:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.062 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:34.062 ************************************ 00:14:34.062 START TEST env 00:14:34.062 ************************************ 00:14:34.062 15:01:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:34.321 * Looking for test storage... 
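The pre_cleanup and opal_revert_cleanup phases traced above both walk the NVMe controllers the same way. A minimal sketch of that walk, assuming the vagrant paths used by this job and the stock gen_nvme.sh / jq output format (this is a reconstruction of the traced commands, not the harness code itself):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # map the PCI address to its controller node, e.g. 0000:00:10.0 -> /dev/nvme1
        ctrlr=/dev/$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
        (( (oacs & 0x8) == 0 )) && continue    # OACS bit 3: namespace management unsupported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue         # nothing unallocated, nothing to revert
        echo "would revert namespaces on $ctrlr"
    done

In this run both QEMU controllers report oacs 0x12a (bit 3 set, so namespace management is available) and unvmcap 0, so the loop hits 'continue' for each of them and no revert work is done.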
00:14:34.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:34.321 15:01:49 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:34.321 15:01:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:34.321 15:01:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.321 15:01:49 -- common/autotest_common.sh@10 -- # set +x 00:14:34.321 ************************************ 00:14:34.321 START TEST env_memory 00:14:34.321 ************************************ 00:14:34.321 15:01:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:34.321 00:14:34.321 00:14:34.321 CUnit - A unit testing framework for C - Version 2.1-3 00:14:34.321 http://cunit.sourceforge.net/ 00:14:34.321 00:14:34.321 00:14:34.321 Suite: memory 00:14:34.321 Test: alloc and free memory map ...[2024-04-18 15:01:49.982232] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:34.321 passed 00:14:34.580 Test: mem map translation ...[2024-04-18 15:01:50.029035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:34.580 [2024-04-18 15:01:50.029593] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:34.580 [2024-04-18 15:01:50.029657] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:34.580 [2024-04-18 15:01:50.029668] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:34.580 passed 00:14:34.580 Test: mem map registration ...[2024-04-18 15:01:50.069352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:14:34.580 [2024-04-18 15:01:50.069403] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:14:34.580 passed 00:14:34.580 Test: mem map adjacent registrations ...passed 00:14:34.580 00:14:34.580 Run Summary: Type Total Ran Passed Failed Inactive 00:14:34.580 suites 1 1 n/a 0 0 00:14:34.580 tests 4 4 4 0 0 00:14:34.580 asserts 152 152 152 0 n/a 00:14:34.580 00:14:34.580 Elapsed time = 0.156 seconds 00:14:34.580 ************************************ 00:14:34.580 END TEST env_memory 00:14:34.580 ************************************ 00:14:34.580 00:14:34.580 real 0m0.188s 00:14:34.580 user 0m0.157s 00:14:34.580 sys 0m0.017s 00:14:34.580 15:01:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.580 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 15:01:50 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:34.580 15:01:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:34.580 15:01:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.580 15:01:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 ************************************ 00:14:34.580 START TEST env_vtophys 00:14:34.580 ************************************ 00:14:34.580 15:01:50 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:34.840 EAL: lib.eal log level changed from notice to debug 00:14:34.840 EAL: Detected lcore 0 as core 0 on socket 0 00:14:34.840 EAL: Detected lcore 1 as core 0 on socket 0 00:14:34.840 EAL: Detected lcore 2 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 3 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 4 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 5 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 6 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 7 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 8 as core 0 on socket 0 00:14:34.841 EAL: Detected lcore 9 as core 0 on socket 0 00:14:34.841 EAL: Maximum logical cores by configuration: 128 00:14:34.841 EAL: Detected CPU lcores: 10 00:14:34.841 EAL: Detected NUMA nodes: 1 00:14:34.841 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:14:34.841 EAL: Detected shared linkage of DPDK 00:14:34.841 EAL: No shared files mode enabled, IPC will be disabled 00:14:34.841 EAL: Selected IOVA mode 'PA' 00:14:34.841 EAL: Probing VFIO support... 00:14:34.841 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:34.841 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:34.841 EAL: Ask a virtual area of 0x2e000 bytes 00:14:34.841 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:34.841 EAL: Setting up physically contiguous memory... 00:14:34.841 EAL: Setting maximum number of open files to 524288 00:14:34.841 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:34.841 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:34.841 EAL: Ask a virtual area of 0x61000 bytes 00:14:34.841 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:34.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:34.841 EAL: Ask a virtual area of 0x400000000 bytes 00:14:34.841 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:34.841 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:34.841 EAL: Ask a virtual area of 0x61000 bytes 00:14:34.841 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:34.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:34.841 EAL: Ask a virtual area of 0x400000000 bytes 00:14:34.841 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:34.841 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:34.841 EAL: Ask a virtual area of 0x61000 bytes 00:14:34.841 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:34.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:34.841 EAL: Ask a virtual area of 0x400000000 bytes 00:14:34.841 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:34.841 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:34.841 EAL: Ask a virtual area of 0x61000 bytes 00:14:34.841 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:34.841 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:34.841 EAL: Ask a virtual area of 0x400000000 bytes 00:14:34.841 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:34.841 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:34.841 EAL: Hugepages will be freed exactly as allocated. 
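The four memseg-list reservations above are internally consistent: each list is created with n_segs 8192 and a 2 MiB hugepage size, and each "Ask a virtual area of 0x400000000 bytes" request is exactly that product. A quick shell-arithmetic check to make the sizes concrete:

    echo $(( 8192 * 2 * 1024 * 1024 ))              # 17179869184 bytes per memseg list
    printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000, matching the log
    echo $(( 4 * 8192 * 2 / 1024 ))                 # 64 GiB of VA reserved across the four lists

Only virtual address space is reserved at this point; physical hugepages are faulted in later as the malloc heap expands, and, per the "Hugepages will be freed exactly as allocated" line, released again on free.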
00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: TSC frequency is ~2490000 KHz 00:14:34.841 EAL: Main lcore 0 is ready (tid=7efd25ce5a00;cpuset=[0]) 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 0 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 2MB 00:14:34.841 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:34.841 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:34.841 EAL: Mem event callback 'spdk:(nil)' registered 00:14:34.841 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:34.841 00:14:34.841 00:14:34.841 CUnit - A unit testing framework for C - Version 2.1-3 00:14:34.841 http://cunit.sourceforge.net/ 00:14:34.841 00:14:34.841 00:14:34.841 Suite: components_suite 00:14:34.841 Test: vtophys_malloc_test ...passed 00:14:34.841 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 4MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 4MB 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 6MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 6MB 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 10MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 10MB 00:14:34.841 EAL: Trying to obtain current memory policy. 
00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 18MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 18MB 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 34MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 34MB 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:34.841 EAL: Restoring previous memory policy: 4 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was expanded by 66MB 00:14:34.841 EAL: Calling mem event callback 'spdk:(nil)' 00:14:34.841 EAL: request: mp_malloc_sync 00:14:34.841 EAL: No shared files mode enabled, IPC is disabled 00:14:34.841 EAL: Heap on socket 0 was shrunk by 66MB 00:14:34.841 EAL: Trying to obtain current memory policy. 00:14:34.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:35.101 EAL: Restoring previous memory policy: 4 00:14:35.101 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.101 EAL: request: mp_malloc_sync 00:14:35.101 EAL: No shared files mode enabled, IPC is disabled 00:14:35.101 EAL: Heap on socket 0 was expanded by 130MB 00:14:35.101 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.101 EAL: request: mp_malloc_sync 00:14:35.101 EAL: No shared files mode enabled, IPC is disabled 00:14:35.101 EAL: Heap on socket 0 was shrunk by 130MB 00:14:35.102 EAL: Trying to obtain current memory policy. 00:14:35.102 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:35.102 EAL: Restoring previous memory policy: 4 00:14:35.102 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.102 EAL: request: mp_malloc_sync 00:14:35.102 EAL: No shared files mode enabled, IPC is disabled 00:14:35.102 EAL: Heap on socket 0 was expanded by 258MB 00:14:35.102 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.360 EAL: request: mp_malloc_sync 00:14:35.360 EAL: No shared files mode enabled, IPC is disabled 00:14:35.360 EAL: Heap on socket 0 was shrunk by 258MB 00:14:35.360 EAL: Trying to obtain current memory policy. 
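A pattern worth noting in this expand/shrink sequence: the reported sizes (4, 6, 10, 18, 34, 66, 130, 258 MB so far, with 514 and 1026 MB still to come below) follow 2^k + 2 MB, presumably the test doubling its allocation each round plus the 2 MB already held by the initial heap. A one-liner reproducing the series:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB

Each "shrunk by" message mirrors its "expanded by" because the buffer is freed right after the vtophys check, and the earlier "Hugepages will be freed exactly as allocated" line explains why the pages go straight back rather than being cached.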
00:14:35.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:35.360 EAL: Restoring previous memory policy: 4 00:14:35.360 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.360 EAL: request: mp_malloc_sync 00:14:35.360 EAL: No shared files mode enabled, IPC is disabled 00:14:35.360 EAL: Heap on socket 0 was expanded by 514MB 00:14:35.619 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.619 EAL: request: mp_malloc_sync 00:14:35.619 EAL: No shared files mode enabled, IPC is disabled 00:14:35.619 EAL: Heap on socket 0 was shrunk by 514MB 00:14:35.619 EAL: Trying to obtain current memory policy. 00:14:35.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:35.879 EAL: Restoring previous memory policy: 4 00:14:35.879 EAL: Calling mem event callback 'spdk:(nil)' 00:14:35.879 EAL: request: mp_malloc_sync 00:14:35.879 EAL: No shared files mode enabled, IPC is disabled 00:14:35.879 EAL: Heap on socket 0 was expanded by 1026MB 00:14:36.140 EAL: Calling mem event callback 'spdk:(nil)' 00:14:36.140 passed 00:14:36.140 00:14:36.140 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.140 suites 1 1 n/a 0 0 00:14:36.140 tests 2 2 2 0 0 00:14:36.140 asserts 5246 5246 5246 0 n/a 00:14:36.140 00:14:36.140 Elapsed time = 1.333 seconds 00:14:36.140 EAL: request: mp_malloc_sync 00:14:36.140 EAL: No shared files mode enabled, IPC is disabled 00:14:36.140 EAL: Heap on socket 0 was shrunk by 1026MB 00:14:36.140 EAL: Calling mem event callback 'spdk:(nil)' 00:14:36.140 EAL: request: mp_malloc_sync 00:14:36.140 EAL: No shared files mode enabled, IPC is disabled 00:14:36.140 EAL: Heap on socket 0 was shrunk by 2MB 00:14:36.140 EAL: No shared files mode enabled, IPC is disabled 00:14:36.140 EAL: No shared files mode enabled, IPC is disabled 00:14:36.140 EAL: No shared files mode enabled, IPC is disabled 00:14:36.140 ************************************ 00:14:36.140 END TEST env_vtophys 00:14:36.140 ************************************ 00:14:36.140 00:14:36.140 real 0m1.534s 00:14:36.140 user 0m0.861s 00:14:36.140 sys 0m0.545s 00:14:36.140 15:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.140 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.400 15:01:51 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:36.400 15:01:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.400 15:01:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.400 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.400 ************************************ 00:14:36.400 START TEST env_pci 00:14:36.400 ************************************ 00:14:36.400 15:01:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:36.400 00:14:36.400 00:14:36.400 CUnit - A unit testing framework for C - Version 2.1-3 00:14:36.400 http://cunit.sourceforge.net/ 00:14:36.400 00:14:36.400 00:14:36.400 Suite: pci 00:14:36.400 Test: pci_hook ...[2024-04-18 15:01:51.980601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59927 has claimed it 00:14:36.400 passed 00:14:36.400 00:14:36.400 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.400 suites 1 1 n/a 0 0 00:14:36.400 tests 1 1 1 0 0 00:14:36.400 asserts 25 25 25 0 n/a 00:14:36.400 00:14:36.400 Elapsed time = 0.003 seconds 00:14:36.400 EAL: Cannot find device (10000:00:01.0) 00:14:36.400 EAL: Failed to attach device 
on primary process 00:14:36.400 00:14:36.400 real 0m0.029s 00:14:36.400 user 0m0.013s 00:14:36.400 sys 0m0.015s 00:14:36.400 15:01:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.400 ************************************ 00:14:36.400 END TEST env_pci 00:14:36.400 ************************************ 00:14:36.400 15:01:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.400 15:01:52 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:14:36.400 15:01:52 -- env/env.sh@15 -- # uname 00:14:36.400 15:01:52 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:14:36.400 15:01:52 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:14:36.400 15:01:52 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:36.400 15:01:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:36.400 15:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.400 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.659 ************************************ 00:14:36.659 START TEST env_dpdk_post_init 00:14:36.659 ************************************ 00:14:36.659 15:01:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:36.659 EAL: Detected CPU lcores: 10 00:14:36.659 EAL: Detected NUMA nodes: 1 00:14:36.659 EAL: Detected shared linkage of DPDK 00:14:36.659 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:36.659 EAL: Selected IOVA mode 'PA' 00:14:36.659 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:36.659 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:14:36.659 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:14:36.659 Starting DPDK initialization... 00:14:36.659 Starting SPDK post initialization... 00:14:36.659 SPDK NVMe probe 00:14:36.659 Attaching to 0000:00:10.0 00:14:36.659 Attaching to 0000:00:11.0 00:14:36.659 Attached to 0000:00:10.0 00:14:36.659 Attached to 0000:00:11.0 00:14:36.659 Cleaning up... 
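If the post-init example needs to be reproduced outside the harness, the invocation is exactly what env.sh assembled above; a sketch, assuming the devices are still bound to uio_pci_generic by the earlier setup.sh run and that the binary is executed with root privileges:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000

Both QEMU NVMe controllers (0000:00:10.0 and 0000:00:11.0) should be probed by spdk_nvme, attach, and then detach again during "Cleaning up...".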
00:14:36.659 ************************************ 00:14:36.659 END TEST env_dpdk_post_init 00:14:36.659 ************************************ 00:14:36.659 00:14:36.659 real 0m0.188s 00:14:36.659 user 0m0.049s 00:14:36.659 sys 0m0.038s 00:14:36.659 15:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:36.659 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.918 15:01:52 -- env/env.sh@26 -- # uname 00:14:36.918 15:01:52 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:36.918 15:01:52 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:36.918 15:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.918 15:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.918 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.918 ************************************ 00:14:36.918 START TEST env_mem_callbacks 00:14:36.918 ************************************ 00:14:36.918 15:01:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:36.918 EAL: Detected CPU lcores: 10 00:14:36.918 EAL: Detected NUMA nodes: 1 00:14:36.918 EAL: Detected shared linkage of DPDK 00:14:36.918 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:36.918 EAL: Selected IOVA mode 'PA' 00:14:37.177 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:37.177 00:14:37.177 00:14:37.177 CUnit - A unit testing framework for C - Version 2.1-3 00:14:37.177 http://cunit.sourceforge.net/ 00:14:37.177 00:14:37.177 00:14:37.177 Suite: memory 00:14:37.177 Test: test ... 00:14:37.177 register 0x200000200000 2097152 00:14:37.177 malloc 3145728 00:14:37.177 register 0x200000400000 4194304 00:14:37.177 buf 0x200000500000 len 3145728 PASSED 00:14:37.177 malloc 64 00:14:37.177 buf 0x2000004fff40 len 64 PASSED 00:14:37.177 malloc 4194304 00:14:37.177 register 0x200000800000 6291456 00:14:37.177 buf 0x200000a00000 len 4194304 PASSED 00:14:37.177 free 0x200000500000 3145728 00:14:37.177 free 0x2000004fff40 64 00:14:37.177 unregister 0x200000400000 4194304 PASSED 00:14:37.177 free 0x200000a00000 4194304 00:14:37.177 unregister 0x200000800000 6291456 PASSED 00:14:37.177 malloc 8388608 00:14:37.177 register 0x200000400000 10485760 00:14:37.177 buf 0x200000600000 len 8388608 PASSED 00:14:37.177 free 0x200000600000 8388608 00:14:37.177 unregister 0x200000400000 10485760 PASSED 00:14:37.177 passed 00:14:37.177 00:14:37.177 Run Summary: Type Total Ran Passed Failed Inactive 00:14:37.177 suites 1 1 n/a 0 0 00:14:37.177 tests 1 1 1 0 0 00:14:37.177 asserts 15 15 15 0 n/a 00:14:37.177 00:14:37.177 Elapsed time = 0.009 seconds 00:14:37.177 00:14:37.177 real 0m0.158s 00:14:37.177 user 0m0.019s 00:14:37.177 sys 0m0.035s 00:14:37.177 15:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.177 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 ************************************ 00:14:37.177 END TEST env_mem_callbacks 00:14:37.177 ************************************ 00:14:37.177 ************************************ 00:14:37.177 END TEST env 00:14:37.177 ************************************ 00:14:37.177 00:14:37.177 real 0m2.977s 00:14:37.177 user 0m1.393s 00:14:37.177 sys 0m1.148s 00:14:37.177 15:01:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.177 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 15:01:52 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
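Every suite in this log is driven through the same run_test helper (from autotest_common.sh), which is what produces the asterisk banners and the time(1)-style real/user/sys summaries. A rough, simplified approximation of its shape; the real helper also manages xtrace and exit-status bookkeeping, and buffering means the banners and timings can interleave with the test's own output in the captured log:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. the call traced just above:
    run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh

The rpc suite launched here follows that shape: its START banner below, a long traced run of rpc.sh, then a timing block at the end.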
00:14:37.177 15:01:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:37.177 15:01:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.177 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:37.177 ************************************ 00:14:37.177 START TEST rpc 00:14:37.177 ************************************ 00:14:37.177 15:01:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:37.435 * Looking for test storage... 00:14:37.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:37.435 15:01:52 -- rpc/rpc.sh@65 -- # spdk_pid=60056 00:14:37.435 15:01:52 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:14:37.435 15:01:52 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:37.435 15:01:52 -- rpc/rpc.sh@67 -- # waitforlisten 60056 00:14:37.435 15:01:52 -- common/autotest_common.sh@817 -- # '[' -z 60056 ']' 00:14:37.435 15:01:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.435 15:01:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.435 15:01:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.435 15:01:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.435 15:01:52 -- common/autotest_common.sh@10 -- # set +x 00:14:37.435 [2024-04-18 15:01:53.030309] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:37.435 [2024-04-18 15:01:53.030899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:14:37.693 [2024-04-18 15:01:53.172278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.693 [2024-04-18 15:01:53.262699] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:37.693 [2024-04-18 15:01:53.262763] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60056' to capture a snapshot of events at runtime. 00:14:37.693 [2024-04-18 15:01:53.262774] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.693 [2024-04-18 15:01:53.262783] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.693 [2024-04-18 15:01:53.262790] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60056 for offline analysis/debug. 
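The rpc_integrity test that follows exercises the malloc and passthru bdev RPCs against the spdk_tgt just started (pid 60056, default /var/tmp/spdk.sock socket). Stripped of the xtrace noise, the sequence traced below amounts to something like this sketch, using the stock scripts/rpc.py client:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)            # 8 MiB, 512-byte blocks -> "Malloc0"
    $rpc bdev_get_bdevs | jq length                    # 1 bdev registered
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length                    # 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                    # back to 0

The num_blocks 16384 and block_size 512 reported in the bdev dumps below are consistent with the 8 MiB size requested here (16384 * 512 = 8388608 bytes).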
00:14:37.693 [2024-04-18 15:01:53.262840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.259 15:01:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.259 15:01:53 -- common/autotest_common.sh@850 -- # return 0 00:14:38.259 15:01:53 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:38.259 15:01:53 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:38.259 15:01:53 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:38.259 15:01:53 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:38.259 15:01:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:38.259 15:01:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.259 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 ************************************ 00:14:38.519 START TEST rpc_integrity 00:14:38.519 ************************************ 00:14:38.519 15:01:53 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:14:38.519 15:01:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:38.519 15:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:38.519 15:01:54 -- rpc/rpc.sh@13 -- # jq length 00:14:38.519 15:01:54 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:38.519 15:01:54 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:38.519 15:01:54 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:38.519 { 00:14:38.519 "aliases": [ 00:14:38.519 "87b2a34e-eec6-4b44-8c7a-f2f3a2c279cc" 00:14:38.519 ], 00:14:38.519 "assigned_rate_limits": { 00:14:38.519 "r_mbytes_per_sec": 0, 00:14:38.519 "rw_ios_per_sec": 0, 00:14:38.519 "rw_mbytes_per_sec": 0, 00:14:38.519 "w_mbytes_per_sec": 0 00:14:38.519 }, 00:14:38.519 "block_size": 512, 00:14:38.519 "claimed": false, 00:14:38.519 "driver_specific": {}, 00:14:38.519 "memory_domains": [ 00:14:38.519 { 00:14:38.519 "dma_device_id": "system", 00:14:38.519 "dma_device_type": 1 00:14:38.519 }, 00:14:38.519 { 00:14:38.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.519 "dma_device_type": 2 00:14:38.519 } 00:14:38.519 ], 00:14:38.519 "name": "Malloc0", 00:14:38.519 "num_blocks": 16384, 00:14:38.519 "product_name": "Malloc disk", 00:14:38.519 "supported_io_types": { 00:14:38.519 "abort": true, 00:14:38.519 "compare": false, 00:14:38.519 "compare_and_write": false, 00:14:38.519 "flush": true, 00:14:38.519 "nvme_admin": false, 00:14:38.519 "nvme_io": false, 00:14:38.519 "read": true, 00:14:38.519 "reset": true, 
00:14:38.519 "unmap": true, 00:14:38.519 "write": true, 00:14:38.519 "write_zeroes": true 00:14:38.519 }, 00:14:38.519 "uuid": "87b2a34e-eec6-4b44-8c7a-f2f3a2c279cc", 00:14:38.519 "zoned": false 00:14:38.519 } 00:14:38.519 ]' 00:14:38.519 15:01:54 -- rpc/rpc.sh@17 -- # jq length 00:14:38.519 15:01:54 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:38.519 15:01:54 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 [2024-04-18 15:01:54.125940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:38.519 [2024-04-18 15:01:54.125992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.519 [2024-04-18 15:01:54.126012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x65eb10 00:14:38.519 [2024-04-18 15:01:54.126021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.519 [2024-04-18 15:01:54.127822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.519 [2024-04-18 15:01:54.127859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:38.519 Passthru0 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:38.519 { 00:14:38.519 "aliases": [ 00:14:38.519 "87b2a34e-eec6-4b44-8c7a-f2f3a2c279cc" 00:14:38.519 ], 00:14:38.519 "assigned_rate_limits": { 00:14:38.519 "r_mbytes_per_sec": 0, 00:14:38.519 "rw_ios_per_sec": 0, 00:14:38.519 "rw_mbytes_per_sec": 0, 00:14:38.519 "w_mbytes_per_sec": 0 00:14:38.519 }, 00:14:38.519 "block_size": 512, 00:14:38.519 "claim_type": "exclusive_write", 00:14:38.519 "claimed": true, 00:14:38.519 "driver_specific": {}, 00:14:38.519 "memory_domains": [ 00:14:38.519 { 00:14:38.519 "dma_device_id": "system", 00:14:38.519 "dma_device_type": 1 00:14:38.519 }, 00:14:38.519 { 00:14:38.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.519 "dma_device_type": 2 00:14:38.519 } 00:14:38.519 ], 00:14:38.519 "name": "Malloc0", 00:14:38.519 "num_blocks": 16384, 00:14:38.519 "product_name": "Malloc disk", 00:14:38.519 "supported_io_types": { 00:14:38.519 "abort": true, 00:14:38.519 "compare": false, 00:14:38.519 "compare_and_write": false, 00:14:38.519 "flush": true, 00:14:38.519 "nvme_admin": false, 00:14:38.519 "nvme_io": false, 00:14:38.519 "read": true, 00:14:38.519 "reset": true, 00:14:38.519 "unmap": true, 00:14:38.519 "write": true, 00:14:38.519 "write_zeroes": true 00:14:38.519 }, 00:14:38.519 "uuid": "87b2a34e-eec6-4b44-8c7a-f2f3a2c279cc", 00:14:38.519 "zoned": false 00:14:38.519 }, 00:14:38.519 { 00:14:38.519 "aliases": [ 00:14:38.519 "ffc38076-9777-5f1e-9bb0-151902619c48" 00:14:38.519 ], 00:14:38.519 "assigned_rate_limits": { 00:14:38.519 "r_mbytes_per_sec": 0, 00:14:38.519 "rw_ios_per_sec": 0, 00:14:38.519 "rw_mbytes_per_sec": 0, 00:14:38.519 "w_mbytes_per_sec": 0 00:14:38.519 }, 00:14:38.519 "block_size": 512, 00:14:38.519 "claimed": false, 00:14:38.519 "driver_specific": { 00:14:38.519 "passthru": { 00:14:38.519 "base_bdev_name": "Malloc0", 00:14:38.519 "name": 
"Passthru0" 00:14:38.519 } 00:14:38.519 }, 00:14:38.519 "memory_domains": [ 00:14:38.519 { 00:14:38.519 "dma_device_id": "system", 00:14:38.519 "dma_device_type": 1 00:14:38.519 }, 00:14:38.519 { 00:14:38.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.519 "dma_device_type": 2 00:14:38.519 } 00:14:38.519 ], 00:14:38.519 "name": "Passthru0", 00:14:38.519 "num_blocks": 16384, 00:14:38.519 "product_name": "passthru", 00:14:38.519 "supported_io_types": { 00:14:38.519 "abort": true, 00:14:38.519 "compare": false, 00:14:38.519 "compare_and_write": false, 00:14:38.519 "flush": true, 00:14:38.519 "nvme_admin": false, 00:14:38.519 "nvme_io": false, 00:14:38.519 "read": true, 00:14:38.519 "reset": true, 00:14:38.519 "unmap": true, 00:14:38.519 "write": true, 00:14:38.519 "write_zeroes": true 00:14:38.519 }, 00:14:38.519 "uuid": "ffc38076-9777-5f1e-9bb0-151902619c48", 00:14:38.519 "zoned": false 00:14:38.519 } 00:14:38.519 ]' 00:14:38.519 15:01:54 -- rpc/rpc.sh@21 -- # jq length 00:14:38.519 15:01:54 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:38.519 15:01:54 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.519 15:01:54 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:38.519 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.519 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.519 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.796 15:01:54 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:38.797 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.797 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.797 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.797 15:01:54 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:38.797 15:01:54 -- rpc/rpc.sh@26 -- # jq length 00:14:38.797 ************************************ 00:14:38.797 END TEST rpc_integrity 00:14:38.797 ************************************ 00:14:38.797 15:01:54 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:38.797 00:14:38.797 real 0m0.288s 00:14:38.797 user 0m0.158s 00:14:38.797 sys 0m0.049s 00:14:38.797 15:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.797 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.797 15:01:54 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:38.797 15:01:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:38.797 15:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.797 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.797 ************************************ 00:14:38.797 START TEST rpc_plugins 00:14:38.797 ************************************ 00:14:38.797 15:01:54 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:14:38.797 15:01:54 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:38.797 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.797 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.797 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.797 15:01:54 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:38.797 15:01:54 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:38.797 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.797 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.797 15:01:54 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.797 15:01:54 -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:38.797 { 00:14:38.797 "aliases": [ 00:14:38.797 "d7c1d34e-d224-4dac-bcbe-5dd963b2135b" 00:14:38.797 ], 00:14:38.797 "assigned_rate_limits": { 00:14:38.797 "r_mbytes_per_sec": 0, 00:14:38.797 "rw_ios_per_sec": 0, 00:14:38.797 "rw_mbytes_per_sec": 0, 00:14:38.797 "w_mbytes_per_sec": 0 00:14:38.797 }, 00:14:38.797 "block_size": 4096, 00:14:38.797 "claimed": false, 00:14:38.797 "driver_specific": {}, 00:14:38.797 "memory_domains": [ 00:14:38.797 { 00:14:38.797 "dma_device_id": "system", 00:14:38.797 "dma_device_type": 1 00:14:38.797 }, 00:14:38.797 { 00:14:38.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.797 "dma_device_type": 2 00:14:38.797 } 00:14:38.797 ], 00:14:38.797 "name": "Malloc1", 00:14:38.797 "num_blocks": 256, 00:14:38.797 "product_name": "Malloc disk", 00:14:38.797 "supported_io_types": { 00:14:38.797 "abort": true, 00:14:38.797 "compare": false, 00:14:38.797 "compare_and_write": false, 00:14:38.797 "flush": true, 00:14:38.797 "nvme_admin": false, 00:14:38.797 "nvme_io": false, 00:14:38.797 "read": true, 00:14:38.797 "reset": true, 00:14:38.797 "unmap": true, 00:14:38.797 "write": true, 00:14:38.797 "write_zeroes": true 00:14:38.797 }, 00:14:38.797 "uuid": "d7c1d34e-d224-4dac-bcbe-5dd963b2135b", 00:14:38.797 "zoned": false 00:14:38.797 } 00:14:38.797 ]' 00:14:38.797 15:01:54 -- rpc/rpc.sh@32 -- # jq length 00:14:39.062 15:01:54 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:39.062 15:01:54 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:39.062 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.062 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.062 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.062 15:01:54 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:39.062 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.062 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.062 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.062 15:01:54 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:39.062 15:01:54 -- rpc/rpc.sh@36 -- # jq length 00:14:39.062 ************************************ 00:14:39.062 END TEST rpc_plugins 00:14:39.062 ************************************ 00:14:39.062 15:01:54 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:39.062 00:14:39.062 real 0m0.144s 00:14:39.062 user 0m0.074s 00:14:39.062 sys 0m0.029s 00:14:39.062 15:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:39.062 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.062 15:01:54 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:39.062 15:01:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:39.062 15:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.062 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.062 ************************************ 00:14:39.062 START TEST rpc_trace_cmd_test 00:14:39.062 ************************************ 00:14:39.062 15:01:54 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:14:39.062 15:01:54 -- rpc/rpc.sh@40 -- # local info 00:14:39.062 15:01:54 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:39.062 15:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.062 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.062 15:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.062 15:01:54 -- rpc/rpc.sh@42 -- # 
info='{ 00:14:39.062 "bdev": { 00:14:39.062 "mask": "0x8", 00:14:39.062 "tpoint_mask": "0xffffffffffffffff" 00:14:39.062 }, 00:14:39.062 "bdev_nvme": { 00:14:39.062 "mask": "0x4000", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "blobfs": { 00:14:39.062 "mask": "0x80", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "dsa": { 00:14:39.062 "mask": "0x200", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "ftl": { 00:14:39.062 "mask": "0x40", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "iaa": { 00:14:39.062 "mask": "0x1000", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "iscsi_conn": { 00:14:39.062 "mask": "0x2", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "nvme_pcie": { 00:14:39.062 "mask": "0x800", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "nvme_tcp": { 00:14:39.062 "mask": "0x2000", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "nvmf_rdma": { 00:14:39.062 "mask": "0x10", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "nvmf_tcp": { 00:14:39.062 "mask": "0x20", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "scsi": { 00:14:39.062 "mask": "0x4", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "sock": { 00:14:39.062 "mask": "0x8000", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "thread": { 00:14:39.062 "mask": "0x400", 00:14:39.062 "tpoint_mask": "0x0" 00:14:39.062 }, 00:14:39.062 "tpoint_group_mask": "0x8", 00:14:39.062 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60056" 00:14:39.062 }' 00:14:39.062 15:01:54 -- rpc/rpc.sh@43 -- # jq length 00:14:39.322 15:01:54 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:14:39.322 15:01:54 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:39.322 15:01:54 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:39.322 15:01:54 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:39.322 15:01:54 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:39.322 15:01:54 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:39.322 15:01:54 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:39.322 15:01:54 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:39.322 ************************************ 00:14:39.322 END TEST rpc_trace_cmd_test 00:14:39.322 ************************************ 00:14:39.322 15:01:54 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:39.322 00:14:39.322 real 0m0.228s 00:14:39.322 user 0m0.175s 00:14:39.322 sys 0m0.044s 00:14:39.322 15:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:39.322 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 15:01:54 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:14:39.322 15:01:54 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:14:39.322 15:01:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:39.322 15:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.322 15:01:54 -- common/autotest_common.sh@10 -- # set +x 00:14:39.580 ************************************ 00:14:39.580 START TEST go_rpc 00:14:39.580 ************************************ 00:14:39.580 15:01:55 -- common/autotest_common.sh@1111 -- # go_rpc 00:14:39.580 15:01:55 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:39.580 15:01:55 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:14:39.580 15:01:55 -- rpc/rpc.sh@52 -- # jq length 00:14:39.580 15:01:55 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:14:39.580 15:01:55 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:14:39.580 
15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.581 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.581 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.581 15:01:55 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:14:39.581 15:01:55 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:39.581 15:01:55 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["14fe3f29-4a4d-4de5-bdc4-1cfc3d54211f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"14fe3f29-4a4d-4de5-bdc4-1cfc3d54211f","zoned":false}]' 00:14:39.581 15:01:55 -- rpc/rpc.sh@57 -- # jq length 00:14:39.581 15:01:55 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:14:39.581 15:01:55 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:39.581 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.581 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.581 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.581 15:01:55 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:39.581 15:01:55 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:14:39.581 15:01:55 -- rpc/rpc.sh@61 -- # jq length 00:14:39.840 ************************************ 00:14:39.840 END TEST go_rpc 00:14:39.840 ************************************ 00:14:39.840 15:01:55 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:14:39.840 00:14:39.840 real 0m0.207s 00:14:39.840 user 0m0.127s 00:14:39.840 sys 0m0.047s 00:14:39.840 15:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:39.840 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 15:01:55 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:39.840 15:01:55 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:39.840 15:01:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:39.840 15:01:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:39.840 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 ************************************ 00:14:39.840 START TEST rpc_daemon_integrity 00:14:39.840 ************************************ 00:14:39.840 15:01:55 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:14:39.840 15:01:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:39.840 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.840 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.840 15:01:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:39.840 15:01:55 -- rpc/rpc.sh@13 -- # jq length 00:14:39.840 15:01:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:39.840 15:01:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:39.840 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.840 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.840 15:01:55 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:14:39.840 15:01:55 -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:14:39.840 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.840 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.840 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.840 15:01:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:39.840 { 00:14:39.840 "aliases": [ 00:14:39.840 "644031db-d59a-491c-bff9-5f1568ca9ff3" 00:14:39.840 ], 00:14:39.840 "assigned_rate_limits": { 00:14:39.840 "r_mbytes_per_sec": 0, 00:14:39.840 "rw_ios_per_sec": 0, 00:14:39.840 "rw_mbytes_per_sec": 0, 00:14:39.840 "w_mbytes_per_sec": 0 00:14:39.840 }, 00:14:39.840 "block_size": 512, 00:14:39.840 "claimed": false, 00:14:39.840 "driver_specific": {}, 00:14:39.840 "memory_domains": [ 00:14:39.840 { 00:14:39.840 "dma_device_id": "system", 00:14:39.840 "dma_device_type": 1 00:14:39.840 }, 00:14:39.840 { 00:14:39.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.840 "dma_device_type": 2 00:14:39.840 } 00:14:39.840 ], 00:14:39.840 "name": "Malloc3", 00:14:39.840 "num_blocks": 16384, 00:14:39.840 "product_name": "Malloc disk", 00:14:39.840 "supported_io_types": { 00:14:39.840 "abort": true, 00:14:39.840 "compare": false, 00:14:39.840 "compare_and_write": false, 00:14:39.840 "flush": true, 00:14:39.840 "nvme_admin": false, 00:14:39.840 "nvme_io": false, 00:14:39.840 "read": true, 00:14:39.840 "reset": true, 00:14:39.840 "unmap": true, 00:14:39.840 "write": true, 00:14:39.840 "write_zeroes": true 00:14:39.840 }, 00:14:39.840 "uuid": "644031db-d59a-491c-bff9-5f1568ca9ff3", 00:14:39.840 "zoned": false 00:14:39.840 } 00:14:39.840 ]' 00:14:39.840 15:01:55 -- rpc/rpc.sh@17 -- # jq length 00:14:40.100 15:01:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:40.100 15:01:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:14:40.100 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 [2024-04-18 15:01:55.556137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:40.100 [2024-04-18 15:01:55.556184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.100 [2024-04-18 15:01:55.556201] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8468a0 00:14:40.100 [2024-04-18 15:01:55.556209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.100 [2024-04-18 15:01:55.557661] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.100 [2024-04-18 15:01:55.557696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:40.100 Passthru0 00:14:40.100 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.100 15:01:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:40.100 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.100 15:01:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:40.100 { 00:14:40.100 "aliases": [ 00:14:40.100 "644031db-d59a-491c-bff9-5f1568ca9ff3" 00:14:40.100 ], 00:14:40.100 "assigned_rate_limits": { 00:14:40.100 "r_mbytes_per_sec": 0, 00:14:40.100 "rw_ios_per_sec": 0, 00:14:40.100 "rw_mbytes_per_sec": 0, 00:14:40.100 "w_mbytes_per_sec": 0 00:14:40.100 }, 00:14:40.100 "block_size": 512, 00:14:40.100 "claim_type": "exclusive_write", 00:14:40.100 "claimed": true, 00:14:40.100 "driver_specific": {}, 00:14:40.100 
"memory_domains": [ 00:14:40.100 { 00:14:40.100 "dma_device_id": "system", 00:14:40.100 "dma_device_type": 1 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.100 "dma_device_type": 2 00:14:40.100 } 00:14:40.100 ], 00:14:40.100 "name": "Malloc3", 00:14:40.100 "num_blocks": 16384, 00:14:40.100 "product_name": "Malloc disk", 00:14:40.100 "supported_io_types": { 00:14:40.100 "abort": true, 00:14:40.100 "compare": false, 00:14:40.100 "compare_and_write": false, 00:14:40.100 "flush": true, 00:14:40.100 "nvme_admin": false, 00:14:40.100 "nvme_io": false, 00:14:40.100 "read": true, 00:14:40.100 "reset": true, 00:14:40.100 "unmap": true, 00:14:40.100 "write": true, 00:14:40.100 "write_zeroes": true 00:14:40.100 }, 00:14:40.100 "uuid": "644031db-d59a-491c-bff9-5f1568ca9ff3", 00:14:40.100 "zoned": false 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "aliases": [ 00:14:40.100 "d9c4462a-4adf-5e04-8064-6039c56ba8d9" 00:14:40.100 ], 00:14:40.100 "assigned_rate_limits": { 00:14:40.100 "r_mbytes_per_sec": 0, 00:14:40.100 "rw_ios_per_sec": 0, 00:14:40.100 "rw_mbytes_per_sec": 0, 00:14:40.100 "w_mbytes_per_sec": 0 00:14:40.100 }, 00:14:40.100 "block_size": 512, 00:14:40.100 "claimed": false, 00:14:40.100 "driver_specific": { 00:14:40.100 "passthru": { 00:14:40.100 "base_bdev_name": "Malloc3", 00:14:40.100 "name": "Passthru0" 00:14:40.100 } 00:14:40.100 }, 00:14:40.100 "memory_domains": [ 00:14:40.100 { 00:14:40.100 "dma_device_id": "system", 00:14:40.100 "dma_device_type": 1 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.100 "dma_device_type": 2 00:14:40.100 } 00:14:40.100 ], 00:14:40.100 "name": "Passthru0", 00:14:40.100 "num_blocks": 16384, 00:14:40.100 "product_name": "passthru", 00:14:40.100 "supported_io_types": { 00:14:40.100 "abort": true, 00:14:40.100 "compare": false, 00:14:40.100 "compare_and_write": false, 00:14:40.100 "flush": true, 00:14:40.100 "nvme_admin": false, 00:14:40.100 "nvme_io": false, 00:14:40.100 "read": true, 00:14:40.100 "reset": true, 00:14:40.100 "unmap": true, 00:14:40.100 "write": true, 00:14:40.100 "write_zeroes": true 00:14:40.100 }, 00:14:40.100 "uuid": "d9c4462a-4adf-5e04-8064-6039c56ba8d9", 00:14:40.100 "zoned": false 00:14:40.100 } 00:14:40.100 ]' 00:14:40.100 15:01:55 -- rpc/rpc.sh@21 -- # jq length 00:14:40.100 15:01:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:40.100 15:01:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:40.100 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.100 15:01:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:40.100 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.100 15:01:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:40.100 15:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 15:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.100 15:01:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:40.100 15:01:55 -- rpc/rpc.sh@26 -- # jq length 00:14:40.100 ************************************ 00:14:40.100 END TEST rpc_daemon_integrity 00:14:40.100 ************************************ 00:14:40.100 15:01:55 
-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:40.100 00:14:40.100 real 0m0.281s 00:14:40.100 user 0m0.160s 00:14:40.100 sys 0m0.044s 00:14:40.100 15:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.100 15:01:55 -- common/autotest_common.sh@10 -- # set +x 00:14:40.100 15:01:55 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:40.100 15:01:55 -- rpc/rpc.sh@84 -- # killprocess 60056 00:14:40.100 15:01:55 -- common/autotest_common.sh@936 -- # '[' -z 60056 ']' 00:14:40.100 15:01:55 -- common/autotest_common.sh@940 -- # kill -0 60056 00:14:40.100 15:01:55 -- common/autotest_common.sh@941 -- # uname 00:14:40.100 15:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:40.100 15:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60056 00:14:40.100 killing process with pid 60056 00:14:40.100 15:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:40.100 15:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:40.100 15:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60056' 00:14:40.100 15:01:55 -- common/autotest_common.sh@955 -- # kill 60056 00:14:40.100 15:01:55 -- common/autotest_common.sh@960 -- # wait 60056 00:14:40.668 00:14:40.668 real 0m3.344s 00:14:40.668 user 0m4.141s 00:14:40.668 sys 0m1.064s 00:14:40.668 ************************************ 00:14:40.668 END TEST rpc 00:14:40.668 ************************************ 00:14:40.668 15:01:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.668 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.668 15:01:56 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:40.668 15:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:40.668 15:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.668 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.668 ************************************ 00:14:40.668 START TEST skip_rpc 00:14:40.668 ************************************ 00:14:40.668 15:01:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:40.927 * Looking for test storage... 00:14:40.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:40.927 15:01:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:40.927 15:01:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.927 15:01:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.927 ************************************ 00:14:40.927 START TEST skip_rpc 00:14:40.927 ************************************ 00:14:40.927 15:01:56 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60354 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:40.927 15:01:56 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:40.927 [2024-04-18 15:01:56.577155] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:40.927 [2024-04-18 15:01:56.577233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:14:41.187 [2024-04-18 15:01:56.716705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.187 [2024-04-18 15:01:56.805475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.462 15:02:01 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:46.462 15:02:01 -- common/autotest_common.sh@638 -- # local es=0 00:14:46.462 15:02:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:46.462 15:02:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:46.462 15:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:46.462 15:02:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:46.462 15:02:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:46.462 15:02:01 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:14:46.462 15:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.462 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:46.462 2024/04/18 15:02:01 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:14:46.462 15:02:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:46.462 15:02:01 -- common/autotest_common.sh@641 -- # es=1 00:14:46.462 15:02:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:46.462 15:02:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:46.462 15:02:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:46.462 15:02:01 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:46.462 15:02:01 -- rpc/skip_rpc.sh@23 -- # killprocess 60354 00:14:46.462 15:02:01 -- common/autotest_common.sh@936 -- # '[' -z 60354 ']' 00:14:46.462 15:02:01 -- common/autotest_common.sh@940 -- # kill -0 60354 00:14:46.462 15:02:01 -- common/autotest_common.sh@941 -- # uname 00:14:46.462 15:02:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.462 15:02:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60354 00:14:46.462 killing process with pid 60354 00:14:46.462 15:02:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.462 15:02:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.462 15:02:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60354' 00:14:46.462 15:02:01 -- common/autotest_common.sh@955 -- # kill 60354 00:14:46.462 15:02:01 -- common/autotest_common.sh@960 -- # wait 60354 00:14:46.462 ************************************ 00:14:46.462 END TEST skip_rpc 00:14:46.462 ************************************ 00:14:46.462 00:14:46.462 real 0m5.454s 00:14:46.462 user 0m5.085s 00:14:46.462 sys 0m0.268s 00:14:46.462 15:02:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.462 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:46.462 15:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.462 15:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.462 15:02:02 -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.462 ************************************ 00:14:46.462 START TEST skip_rpc_with_json 00:14:46.462 ************************************ 00:14:46.462 15:02:02 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60445 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:46.462 15:02:02 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60445 00:14:46.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.462 15:02:02 -- common/autotest_common.sh@817 -- # '[' -z 60445 ']' 00:14:46.462 15:02:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.462 15:02:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.462 15:02:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.462 15:02:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.462 15:02:02 -- common/autotest_common.sh@10 -- # set +x 00:14:46.720 [2024-04-18 15:02:02.186440] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:46.721 [2024-04-18 15:02:02.186524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60445 ] 00:14:46.721 [2024-04-18 15:02:02.329958] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.721 [2024-04-18 15:02:02.417190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.668 15:02:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:47.668 15:02:03 -- common/autotest_common.sh@850 -- # return 0 00:14:47.668 15:02:03 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:47.668 15:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.668 15:02:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.668 [2024-04-18 15:02:03.062380] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:47.668 2024/04/18 15:02:03 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:14:47.668 request: 00:14:47.668 { 00:14:47.668 "method": "nvmf_get_transports", 00:14:47.668 "params": { 00:14:47.668 "trtype": "tcp" 00:14:47.668 } 00:14:47.668 } 00:14:47.668 Got JSON-RPC error response 00:14:47.668 GoRPCClient: error on JSON-RPC call 00:14:47.668 15:02:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:47.668 15:02:03 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:47.668 15:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.668 15:02:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.668 [2024-04-18 15:02:03.078419] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.668 15:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:47.668 15:02:03 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:47.668 15:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:47.668 15:02:03 -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.668 15:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:47.668 15:02:03 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:47.668 { 00:14:47.668 "subsystems": [ 00:14:47.668 { 00:14:47.668 "subsystem": "keyring", 00:14:47.668 "config": [] 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "subsystem": "iobuf", 00:14:47.668 "config": [ 00:14:47.668 { 00:14:47.668 "method": "iobuf_set_options", 00:14:47.668 "params": { 00:14:47.668 "large_bufsize": 135168, 00:14:47.668 "large_pool_count": 1024, 00:14:47.668 "small_bufsize": 8192, 00:14:47.668 "small_pool_count": 8192 00:14:47.668 } 00:14:47.668 } 00:14:47.668 ] 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "subsystem": "sock", 00:14:47.668 "config": [ 00:14:47.668 { 00:14:47.668 "method": "sock_impl_set_options", 00:14:47.668 "params": { 00:14:47.668 "enable_ktls": false, 00:14:47.668 "enable_placement_id": 0, 00:14:47.668 "enable_quickack": false, 00:14:47.668 "enable_recv_pipe": true, 00:14:47.668 "enable_zerocopy_send_client": false, 00:14:47.668 "enable_zerocopy_send_server": true, 00:14:47.668 "impl_name": "posix", 00:14:47.668 "recv_buf_size": 2097152, 00:14:47.668 "send_buf_size": 2097152, 00:14:47.668 "tls_version": 0, 00:14:47.668 "zerocopy_threshold": 0 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "method": "sock_impl_set_options", 00:14:47.668 "params": { 00:14:47.668 "enable_ktls": false, 00:14:47.668 "enable_placement_id": 0, 00:14:47.668 "enable_quickack": false, 00:14:47.668 "enable_recv_pipe": true, 00:14:47.668 "enable_zerocopy_send_client": false, 00:14:47.668 "enable_zerocopy_send_server": true, 00:14:47.668 "impl_name": "ssl", 00:14:47.668 "recv_buf_size": 4096, 00:14:47.668 "send_buf_size": 4096, 00:14:47.668 "tls_version": 0, 00:14:47.668 "zerocopy_threshold": 0 00:14:47.668 } 00:14:47.668 } 00:14:47.668 ] 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "subsystem": "vmd", 00:14:47.668 "config": [] 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "subsystem": "accel", 00:14:47.668 "config": [ 00:14:47.668 { 00:14:47.668 "method": "accel_set_options", 00:14:47.668 "params": { 00:14:47.668 "buf_count": 2048, 00:14:47.668 "large_cache_size": 16, 00:14:47.668 "sequence_count": 2048, 00:14:47.668 "small_cache_size": 128, 00:14:47.668 "task_count": 2048 00:14:47.668 } 00:14:47.668 } 00:14:47.668 ] 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "subsystem": "bdev", 00:14:47.668 "config": [ 00:14:47.668 { 00:14:47.668 "method": "bdev_set_options", 00:14:47.668 "params": { 00:14:47.668 "bdev_auto_examine": true, 00:14:47.668 "bdev_io_cache_size": 256, 00:14:47.668 "bdev_io_pool_size": 65535, 00:14:47.668 "iobuf_large_cache_size": 16, 00:14:47.668 "iobuf_small_cache_size": 128 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "method": "bdev_raid_set_options", 00:14:47.668 "params": { 00:14:47.668 "process_window_size_kb": 1024 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "method": "bdev_iscsi_set_options", 00:14:47.668 "params": { 00:14:47.668 "timeout_sec": 30 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "method": "bdev_nvme_set_options", 00:14:47.668 "params": { 00:14:47.668 "action_on_timeout": "none", 00:14:47.668 "allow_accel_sequence": false, 00:14:47.668 "arbitration_burst": 0, 00:14:47.668 "bdev_retry_count": 3, 00:14:47.668 "ctrlr_loss_timeout_sec": 0, 00:14:47.668 "delay_cmd_submit": true, 00:14:47.668 "dhchap_dhgroups": [ 00:14:47.668 "null", 00:14:47.668 "ffdhe2048", 00:14:47.668 
"ffdhe3072", 00:14:47.668 "ffdhe4096", 00:14:47.668 "ffdhe6144", 00:14:47.668 "ffdhe8192" 00:14:47.668 ], 00:14:47.668 "dhchap_digests": [ 00:14:47.668 "sha256", 00:14:47.668 "sha384", 00:14:47.668 "sha512" 00:14:47.668 ], 00:14:47.668 "disable_auto_failback": false, 00:14:47.668 "fast_io_fail_timeout_sec": 0, 00:14:47.668 "generate_uuids": false, 00:14:47.668 "high_priority_weight": 0, 00:14:47.668 "io_path_stat": false, 00:14:47.668 "io_queue_requests": 0, 00:14:47.668 "keep_alive_timeout_ms": 10000, 00:14:47.668 "low_priority_weight": 0, 00:14:47.668 "medium_priority_weight": 0, 00:14:47.668 "nvme_adminq_poll_period_us": 10000, 00:14:47.668 "nvme_error_stat": false, 00:14:47.668 "nvme_ioq_poll_period_us": 0, 00:14:47.668 "rdma_cm_event_timeout_ms": 0, 00:14:47.668 "rdma_max_cq_size": 0, 00:14:47.668 "rdma_srq_size": 0, 00:14:47.668 "reconnect_delay_sec": 0, 00:14:47.668 "timeout_admin_us": 0, 00:14:47.668 "timeout_us": 0, 00:14:47.668 "transport_ack_timeout": 0, 00:14:47.668 "transport_retry_count": 4, 00:14:47.668 "transport_tos": 0 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.668 "method": "bdev_nvme_set_hotplug", 00:14:47.668 "params": { 00:14:47.668 "enable": false, 00:14:47.668 "period_us": 100000 00:14:47.668 } 00:14:47.668 }, 00:14:47.668 { 00:14:47.669 "method": "bdev_wait_for_examine" 00:14:47.669 } 00:14:47.669 ] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "scsi", 00:14:47.669 "config": null 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "scheduler", 00:14:47.669 "config": [ 00:14:47.669 { 00:14:47.669 "method": "framework_set_scheduler", 00:14:47.669 "params": { 00:14:47.669 "name": "static" 00:14:47.669 } 00:14:47.669 } 00:14:47.669 ] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "vhost_scsi", 00:14:47.669 "config": [] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "vhost_blk", 00:14:47.669 "config": [] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "ublk", 00:14:47.669 "config": [] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "nbd", 00:14:47.669 "config": [] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "nvmf", 00:14:47.669 "config": [ 00:14:47.669 { 00:14:47.669 "method": "nvmf_set_config", 00:14:47.669 "params": { 00:14:47.669 "admin_cmd_passthru": { 00:14:47.669 "identify_ctrlr": false 00:14:47.669 }, 00:14:47.669 "discovery_filter": "match_any" 00:14:47.669 } 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "method": "nvmf_set_max_subsystems", 00:14:47.669 "params": { 00:14:47.669 "max_subsystems": 1024 00:14:47.669 } 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "method": "nvmf_set_crdt", 00:14:47.669 "params": { 00:14:47.669 "crdt1": 0, 00:14:47.669 "crdt2": 0, 00:14:47.669 "crdt3": 0 00:14:47.669 } 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "method": "nvmf_create_transport", 00:14:47.669 "params": { 00:14:47.669 "abort_timeout_sec": 1, 00:14:47.669 "ack_timeout": 0, 00:14:47.669 "buf_cache_size": 4294967295, 00:14:47.669 "c2h_success": true, 00:14:47.669 "dif_insert_or_strip": false, 00:14:47.669 "in_capsule_data_size": 4096, 00:14:47.669 "io_unit_size": 131072, 00:14:47.669 "max_aq_depth": 128, 00:14:47.669 "max_io_qpairs_per_ctrlr": 127, 00:14:47.669 "max_io_size": 131072, 00:14:47.669 "max_queue_depth": 128, 00:14:47.669 "num_shared_buffers": 511, 00:14:47.669 "sock_priority": 0, 00:14:47.669 "trtype": "TCP", 00:14:47.669 "zcopy": false 00:14:47.669 } 00:14:47.669 } 00:14:47.669 ] 00:14:47.669 }, 00:14:47.669 { 00:14:47.669 "subsystem": "iscsi", 00:14:47.669 "config": [ 00:14:47.669 { 
00:14:47.669 "method": "iscsi_set_options", 00:14:47.669 "params": { 00:14:47.669 "allow_duplicated_isid": false, 00:14:47.669 "chap_group": 0, 00:14:47.669 "data_out_pool_size": 2048, 00:14:47.669 "default_time2retain": 20, 00:14:47.669 "default_time2wait": 2, 00:14:47.669 "disable_chap": false, 00:14:47.669 "error_recovery_level": 0, 00:14:47.669 "first_burst_length": 8192, 00:14:47.669 "immediate_data": true, 00:14:47.669 "immediate_data_pool_size": 16384, 00:14:47.669 "max_connections_per_session": 2, 00:14:47.669 "max_large_datain_per_connection": 64, 00:14:47.669 "max_queue_depth": 64, 00:14:47.669 "max_r2t_per_connection": 4, 00:14:47.669 "max_sessions": 128, 00:14:47.669 "mutual_chap": false, 00:14:47.669 "node_base": "iqn.2016-06.io.spdk", 00:14:47.669 "nop_in_interval": 30, 00:14:47.669 "nop_timeout": 60, 00:14:47.669 "pdu_pool_size": 36864, 00:14:47.669 "require_chap": false 00:14:47.669 } 00:14:47.669 } 00:14:47.669 ] 00:14:47.669 } 00:14:47.669 ] 00:14:47.669 } 00:14:47.669 15:02:03 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:47.669 15:02:03 -- rpc/skip_rpc.sh@40 -- # killprocess 60445 00:14:47.669 15:02:03 -- common/autotest_common.sh@936 -- # '[' -z 60445 ']' 00:14:47.669 15:02:03 -- common/autotest_common.sh@940 -- # kill -0 60445 00:14:47.669 15:02:03 -- common/autotest_common.sh@941 -- # uname 00:14:47.669 15:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:47.669 15:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60445 00:14:47.669 15:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:47.669 killing process with pid 60445 00:14:47.669 15:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:47.669 15:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60445' 00:14:47.669 15:02:03 -- common/autotest_common.sh@955 -- # kill 60445 00:14:47.669 15:02:03 -- common/autotest_common.sh@960 -- # wait 60445 00:14:48.282 15:02:03 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:48.282 15:02:03 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60490 00:14:48.282 15:02:03 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:14:53.549 15:02:08 -- rpc/skip_rpc.sh@50 -- # killprocess 60490 00:14:53.549 15:02:08 -- common/autotest_common.sh@936 -- # '[' -z 60490 ']' 00:14:53.549 15:02:08 -- common/autotest_common.sh@940 -- # kill -0 60490 00:14:53.549 15:02:08 -- common/autotest_common.sh@941 -- # uname 00:14:53.549 15:02:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.549 15:02:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60490 00:14:53.549 killing process with pid 60490 00:14:53.549 15:02:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:53.549 15:02:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.549 15:02:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60490' 00:14:53.549 15:02:08 -- common/autotest_common.sh@955 -- # kill 60490 00:14:53.549 15:02:08 -- common/autotest_common.sh@960 -- # wait 60490 00:14:53.549 15:02:09 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:53.549 15:02:09 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:53.549 ************************************ 00:14:53.549 END TEST skip_rpc_with_json 00:14:53.549 ************************************ 
00:14:53.549 00:14:53.549 real 0m7.035s 00:14:53.549 user 0m6.663s 00:14:53.549 sys 0m0.681s 00:14:53.549 15:02:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:53.549 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:53.549 15:02:09 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:14:53.549 15:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.549 15:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.549 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:53.809 ************************************ 00:14:53.809 START TEST skip_rpc_with_delay 00:14:53.809 ************************************ 00:14:53.809 15:02:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:14:53.809 15:02:09 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:53.809 15:02:09 -- common/autotest_common.sh@638 -- # local es=0 00:14:53.809 15:02:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:53.809 15:02:09 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.809 15:02:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.809 15:02:09 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.809 15:02:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.809 15:02:09 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.809 15:02:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:53.809 15:02:09 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:53.809 15:02:09 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:53.809 15:02:09 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:14:53.809 [2024-04-18 15:02:09.380175] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:14:53.809 [2024-04-18 15:02:09.380300] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:14:53.809 15:02:09 -- common/autotest_common.sh@641 -- # es=1 00:14:53.809 15:02:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:53.809 15:02:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:53.809 15:02:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:53.809 00:14:53.809 real 0m0.083s 00:14:53.809 user 0m0.050s 00:14:53.809 sys 0m0.031s 00:14:53.809 ************************************ 00:14:53.809 END TEST skip_rpc_with_delay 00:14:53.809 ************************************ 00:14:53.809 15:02:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:53.809 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:53.809 15:02:09 -- rpc/skip_rpc.sh@77 -- # uname 00:14:53.809 15:02:09 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:14:53.809 15:02:09 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:14:53.809 15:02:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.809 15:02:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.809 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:54.069 ************************************ 00:14:54.069 START TEST exit_on_failed_rpc_init 00:14:54.069 ************************************ 00:14:54.069 15:02:09 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:14:54.069 15:02:09 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60608 00:14:54.069 15:02:09 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:54.069 15:02:09 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60608 00:14:54.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.069 15:02:09 -- common/autotest_common.sh@817 -- # '[' -z 60608 ']' 00:14:54.069 15:02:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.069 15:02:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:54.069 15:02:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.069 15:02:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:54.069 15:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:54.069 [2024-04-18 15:02:09.620558] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:54.069 [2024-04-18 15:02:09.620654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:14:54.069 [2024-04-18 15:02:09.761564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.329 [2024-04-18 15:02:09.857890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.898 15:02:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:54.898 15:02:10 -- common/autotest_common.sh@850 -- # return 0 00:14:54.898 15:02:10 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:54.898 15:02:10 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:54.898 15:02:10 -- common/autotest_common.sh@638 -- # local es=0 00:14:54.898 15:02:10 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:54.898 15:02:10 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.898 15:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.898 15:02:10 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.898 15:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.898 15:02:10 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.898 15:02:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:54.898 15:02:10 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:54.898 15:02:10 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:14:54.898 15:02:10 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:14:54.898 [2024-04-18 15:02:10.555038] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:14:54.898 [2024-04-18 15:02:10.555372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:14:55.157 [2024-04-18 15:02:10.697378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.157 [2024-04-18 15:02:10.793710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.157 [2024-04-18 15:02:10.794226] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:14:55.157 [2024-04-18 15:02:10.794460] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:14:55.157 [2024-04-18 15:02:10.794690] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:55.422 15:02:10 -- common/autotest_common.sh@641 -- # es=234 00:14:55.422 15:02:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:55.422 15:02:10 -- common/autotest_common.sh@650 -- # es=106 00:14:55.422 15:02:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:55.422 15:02:10 -- common/autotest_common.sh@658 -- # es=1 00:14:55.422 15:02:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:55.422 15:02:10 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:55.422 15:02:10 -- rpc/skip_rpc.sh@70 -- # killprocess 60608 00:14:55.422 15:02:10 -- common/autotest_common.sh@936 -- # '[' -z 60608 ']' 00:14:55.422 15:02:10 -- common/autotest_common.sh@940 -- # kill -0 60608 00:14:55.422 15:02:10 -- common/autotest_common.sh@941 -- # uname 00:14:55.422 15:02:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:55.422 15:02:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60608 00:14:55.422 killing process with pid 60608 00:14:55.422 15:02:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:55.422 15:02:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:55.422 15:02:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60608' 00:14:55.422 15:02:10 -- common/autotest_common.sh@955 -- # kill 60608 00:14:55.422 15:02:10 -- common/autotest_common.sh@960 -- # wait 60608 00:14:55.705 00:14:55.705 real 0m1.808s 00:14:55.705 user 0m2.020s 00:14:55.705 sys 0m0.445s 00:14:55.705 ************************************ 00:14:55.705 END TEST exit_on_failed_rpc_init 00:14:55.705 ************************************ 00:14:55.705 15:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:55.705 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.963 15:02:11 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:55.963 00:14:55.963 real 0m15.112s 00:14:55.963 user 0m14.067s 00:14:55.963 sys 0m1.847s 00:14:55.963 15:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:55.963 ************************************ 00:14:55.963 END TEST skip_rpc 00:14:55.963 ************************************ 00:14:55.963 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.963 15:02:11 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:55.963 15:02:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:55.963 15:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:55.963 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.963 ************************************ 00:14:55.963 START TEST rpc_client 00:14:55.963 ************************************ 00:14:55.963 15:02:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:14:56.223 * Looking for test storage... 
00:14:56.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:14:56.223 15:02:11 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:14:56.223 OK 00:14:56.223 15:02:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:14:56.223 00:14:56.223 real 0m0.159s 00:14:56.223 user 0m0.064s 00:14:56.223 sys 0m0.104s 00:14:56.223 15:02:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.223 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:56.223 ************************************ 00:14:56.223 END TEST rpc_client 00:14:56.223 ************************************ 00:14:56.223 15:02:11 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:56.223 15:02:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:56.223 15:02:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.223 15:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:56.223 ************************************ 00:14:56.223 START TEST json_config 00:14:56.223 ************************************ 00:14:56.223 15:02:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:14:56.482 15:02:11 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.482 15:02:11 -- nvmf/common.sh@7 -- # uname -s 00:14:56.482 15:02:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.482 15:02:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.482 15:02:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.482 15:02:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.482 15:02:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.482 15:02:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.482 15:02:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.482 15:02:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.482 15:02:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.482 15:02:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.482 15:02:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:14:56.482 15:02:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:14:56.482 15:02:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.482 15:02:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.482 15:02:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:56.482 15:02:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.482 15:02:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.482 15:02:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.482 15:02:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.482 15:02:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.482 15:02:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.482 15:02:11 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.482 15:02:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.482 15:02:11 -- paths/export.sh@5 -- # export PATH 00:14:56.483 15:02:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.483 15:02:11 -- nvmf/common.sh@47 -- # : 0 00:14:56.483 15:02:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.483 15:02:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.483 15:02:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.483 15:02:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.483 15:02:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.483 15:02:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.483 15:02:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.483 15:02:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.483 15:02:11 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:14:56.483 15:02:11 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:14:56.483 15:02:11 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:14:56.483 15:02:11 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:14:56.483 15:02:11 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:14:56.483 15:02:11 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:14:56.483 15:02:11 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:14:56.483 15:02:12 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:14:56.483 15:02:12 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:14:56.483 15:02:12 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:14:56.483 15:02:12 -- json_config/json_config.sh@33 -- # declare -A app_params 00:14:56.483 15:02:12 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:14:56.483 15:02:12 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:14:56.483 15:02:12 -- json_config/json_config.sh@40 -- # last_event_id=0 00:14:56.483 
15:02:12 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:14:56.483 15:02:12 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:14:56.483 INFO: JSON configuration test init 00:14:56.483 15:02:12 -- json_config/json_config.sh@357 -- # json_config_test_init 00:14:56.483 15:02:12 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:14:56.483 15:02:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:56.483 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.483 15:02:12 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:14:56.483 15:02:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:56.483 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.483 15:02:12 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:14:56.483 15:02:12 -- json_config/common.sh@9 -- # local app=target 00:14:56.483 15:02:12 -- json_config/common.sh@10 -- # shift 00:14:56.483 15:02:12 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:14:56.483 15:02:12 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:14:56.483 15:02:12 -- json_config/common.sh@15 -- # local app_extra_params= 00:14:56.483 15:02:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:56.483 15:02:12 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:14:56.483 Waiting for target to run... 00:14:56.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:14:56.483 15:02:12 -- json_config/common.sh@22 -- # app_pid["$app"]=60769 00:14:56.483 15:02:12 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:14:56.483 15:02:12 -- json_config/common.sh@25 -- # waitforlisten 60769 /var/tmp/spdk_tgt.sock 00:14:56.483 15:02:12 -- common/autotest_common.sh@817 -- # '[' -z 60769 ']' 00:14:56.483 15:02:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:14:56.483 15:02:12 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:14:56.483 15:02:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:56.483 15:02:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:14:56.483 15:02:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:56.483 15:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:56.483 [2024-04-18 15:02:12.083523] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:14:56.483 [2024-04-18 15:02:12.083785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60769 ] 00:14:57.051 [2024-04-18 15:02:12.485616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.051 [2024-04-18 15:02:12.564747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.310 15:02:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.310 00:14:57.310 15:02:13 -- common/autotest_common.sh@850 -- # return 0 00:14:57.310 15:02:13 -- json_config/common.sh@26 -- # echo '' 00:14:57.310 15:02:13 -- json_config/json_config.sh@269 -- # create_accel_config 00:14:57.310 15:02:13 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:14:57.310 15:02:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.310 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.568 15:02:13 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:14:57.568 15:02:13 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:14:57.568 15:02:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:57.568 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.568 15:02:13 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:14:57.568 15:02:13 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:14:57.568 15:02:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:14:57.827 15:02:13 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:14:57.827 15:02:13 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:14:57.827 15:02:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.827 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.827 15:02:13 -- json_config/json_config.sh@45 -- # local ret=0 00:14:57.827 15:02:13 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:14:57.827 15:02:13 -- json_config/json_config.sh@46 -- # local enabled_types 00:14:57.827 15:02:13 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:14:57.827 15:02:13 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:14:57.827 15:02:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:14:58.086 15:02:13 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:14:58.086 15:02:13 -- json_config/json_config.sh@48 -- # local get_types 00:14:58.086 15:02:13 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:14:58.086 15:02:13 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:14:58.086 15:02:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:58.086 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.086 15:02:13 -- json_config/json_config.sh@55 -- # return 0 00:14:58.086 15:02:13 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:14:58.086 15:02:13 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:14:58.086 15:02:13 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:14:58.086 15:02:13 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
00:14:58.086 15:02:13 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:14:58.086 15:02:13 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:14:58.086 15:02:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:58.086 15:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:58.344 15:02:13 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:14:58.344 15:02:13 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:14:58.344 15:02:13 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:14:58.344 15:02:13 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:58.344 15:02:13 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:14:58.344 MallocForNvmf0 00:14:58.344 15:02:14 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:58.344 15:02:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:14:58.602 MallocForNvmf1 00:14:58.602 15:02:14 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:14:58.602 15:02:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:14:58.860 [2024-04-18 15:02:14.489306] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.860 15:02:14 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.860 15:02:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:59.119 15:02:14 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:59.119 15:02:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:14:59.378 15:02:14 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:59.378 15:02:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:14:59.637 15:02:15 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:59.637 15:02:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:14:59.902 [2024-04-18 15:02:15.445494] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:59.902 15:02:15 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:14:59.902 15:02:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.902 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:14:59.902 15:02:15 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:14:59.902 15:02:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:59.902 15:02:15 -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.902 15:02:15 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:14:59.902 15:02:15 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:14:59.902 15:02:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:00.179 MallocBdevForConfigChangeCheck 00:15:00.179 15:02:15 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:15:00.179 15:02:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.179 15:02:15 -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 15:02:15 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:15:00.448 15:02:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:00.713 INFO: shutting down applications... 00:15:00.713 15:02:16 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:15:00.713 15:02:16 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:15:00.713 15:02:16 -- json_config/json_config.sh@368 -- # json_config_clear target 00:15:00.713 15:02:16 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:15:00.713 15:02:16 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:01.033 Calling clear_iscsi_subsystem 00:15:01.033 Calling clear_nvmf_subsystem 00:15:01.034 Calling clear_nbd_subsystem 00:15:01.034 Calling clear_ublk_subsystem 00:15:01.034 Calling clear_vhost_blk_subsystem 00:15:01.034 Calling clear_vhost_scsi_subsystem 00:15:01.034 Calling clear_bdev_subsystem 00:15:01.034 15:02:16 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:01.034 15:02:16 -- json_config/json_config.sh@343 -- # count=100 00:15:01.034 15:02:16 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:15:01.034 15:02:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:01.034 15:02:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:01.034 15:02:16 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:01.308 15:02:16 -- json_config/json_config.sh@345 -- # break 00:15:01.308 15:02:16 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:15:01.308 15:02:16 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:15:01.308 15:02:16 -- json_config/common.sh@31 -- # local app=target 00:15:01.308 15:02:16 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:01.308 15:02:16 -- json_config/common.sh@35 -- # [[ -n 60769 ]] 00:15:01.308 15:02:16 -- json_config/common.sh@38 -- # kill -SIGINT 60769 00:15:01.308 15:02:16 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:01.308 15:02:16 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:01.308 15:02:16 -- json_config/common.sh@41 -- # kill -0 60769 00:15:01.308 15:02:16 -- json_config/common.sh@45 -- # sleep 0.5 00:15:01.879 15:02:17 -- json_config/common.sh@40 -- # (( i++ )) 00:15:01.879 15:02:17 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:01.879 15:02:17 -- json_config/common.sh@41 -- # kill -0 60769 00:15:01.879 15:02:17 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:15:01.879 15:02:17 -- json_config/common.sh@43 -- # break 00:15:01.879 15:02:17 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:01.879 15:02:17 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:01.879 SPDK target shutdown done 00:15:01.879 15:02:17 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:15:01.879 INFO: relaunching applications... 00:15:01.879 15:02:17 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:01.879 15:02:17 -- json_config/common.sh@9 -- # local app=target 00:15:01.879 15:02:17 -- json_config/common.sh@10 -- # shift 00:15:01.879 15:02:17 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:01.879 15:02:17 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:01.879 15:02:17 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:01.879 15:02:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:01.879 15:02:17 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:01.879 15:02:17 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:01.879 15:02:17 -- json_config/common.sh@22 -- # app_pid["$app"]=61044 00:15:01.879 Waiting for target to run... 00:15:01.879 15:02:17 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:01.879 15:02:17 -- json_config/common.sh@25 -- # waitforlisten 61044 /var/tmp/spdk_tgt.sock 00:15:01.879 15:02:17 -- common/autotest_common.sh@817 -- # '[' -z 61044 ']' 00:15:01.879 15:02:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:01.879 15:02:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:01.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:01.879 15:02:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:01.879 15:02:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:01.879 15:02:17 -- common/autotest_common.sh@10 -- # set +x 00:15:01.879 [2024-04-18 15:02:17.492705] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:01.879 [2024-04-18 15:02:17.492784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61044 ] 00:15:02.446 [2024-04-18 15:02:17.914239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.446 [2024-04-18 15:02:17.990292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.705 [2024-04-18 15:02:18.296848] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.705 [2024-04-18 15:02:18.328885] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:02.705 15:02:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:02.705 15:02:18 -- common/autotest_common.sh@850 -- # return 0 00:15:02.705 00:15:02.705 15:02:18 -- json_config/common.sh@26 -- # echo '' 00:15:02.705 15:02:18 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:15:02.705 INFO: Checking if target configuration is the same... 
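For reference, the NVMe-oF target configuration being round-tripped below was assembled earlier in this test; condensing the RPC calls visible in the log above into one sketch (same socket and arguments as this run):
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0          # 8 MiB bdev, 512-byte blocks
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1         # 4 MiB bdev, 1024-byte blocks
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420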
00:15:02.705 15:02:18 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:02.705 15:02:18 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:02.705 15:02:18 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:15:02.705 15:02:18 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:02.705 + '[' 2 -ne 2 ']' 00:15:02.705 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:02.705 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:02.705 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:02.705 +++ basename /dev/fd/62 00:15:02.705 ++ mktemp /tmp/62.XXX 00:15:02.705 + tmp_file_1=/tmp/62.ACI 00:15:02.705 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:02.705 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:02.705 + tmp_file_2=/tmp/spdk_tgt_config.json.7ni 00:15:02.705 + ret=0 00:15:02.705 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:03.273 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:03.273 + diff -u /tmp/62.ACI /tmp/spdk_tgt_config.json.7ni 00:15:03.273 INFO: JSON config files are the same 00:15:03.273 + echo 'INFO: JSON config files are the same' 00:15:03.273 + rm /tmp/62.ACI /tmp/spdk_tgt_config.json.7ni 00:15:03.273 + exit 0 00:15:03.273 15:02:18 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:15:03.273 INFO: changing configuration and checking if this can be detected... 00:15:03.273 15:02:18 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:15:03.273 15:02:18 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:03.273 15:02:18 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:03.273 15:02:18 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:03.273 15:02:18 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:15:03.273 15:02:18 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:03.533 + '[' 2 -ne 2 ']' 00:15:03.533 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:03.533 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
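The comparison above reduces to dumping the live configuration, canonicalizing both JSON documents, and diffing them; a condensed sketch of what test/json_config/json_diff.sh does in this run (assuming, as the invocation here suggests, that config_filter.py reads the JSON on stdin):
live=$(mktemp) saved=$(mktemp)
# canonicalize the running target's configuration
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > "$live"
# canonicalize the saved configuration file the target was launched from
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
  < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'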
00:15:03.533 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:03.533 +++ basename /dev/fd/62 00:15:03.533 ++ mktemp /tmp/62.XXX 00:15:03.533 + tmp_file_1=/tmp/62.5BV 00:15:03.533 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:03.533 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:03.533 + tmp_file_2=/tmp/spdk_tgt_config.json.9p8 00:15:03.533 + ret=0 00:15:03.533 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:03.807 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:03.807 + diff -u /tmp/62.5BV /tmp/spdk_tgt_config.json.9p8 00:15:03.807 + ret=1 00:15:03.807 + echo '=== Start of file: /tmp/62.5BV ===' 00:15:03.807 + cat /tmp/62.5BV 00:15:03.807 + echo '=== End of file: /tmp/62.5BV ===' 00:15:03.807 + echo '' 00:15:03.807 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9p8 ===' 00:15:03.807 + cat /tmp/spdk_tgt_config.json.9p8 00:15:03.807 + echo '=== End of file: /tmp/spdk_tgt_config.json.9p8 ===' 00:15:03.807 + echo '' 00:15:03.807 + rm /tmp/62.5BV /tmp/spdk_tgt_config.json.9p8 00:15:03.807 + exit 1 00:15:03.807 15:02:19 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:15:03.807 INFO: configuration change detected. 00:15:03.807 15:02:19 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:15:03.807 15:02:19 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:15:03.807 15:02:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:03.807 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:03.807 15:02:19 -- json_config/json_config.sh@307 -- # local ret=0 00:15:03.807 15:02:19 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:15:03.807 15:02:19 -- json_config/json_config.sh@317 -- # [[ -n 61044 ]] 00:15:03.807 15:02:19 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:15:03.807 15:02:19 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:15:03.807 15:02:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:03.807 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:03.807 15:02:19 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:15:03.807 15:02:19 -- json_config/json_config.sh@193 -- # uname -s 00:15:03.807 15:02:19 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:15:03.807 15:02:19 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:15:03.807 15:02:19 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:15:03.807 15:02:19 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:15:03.807 15:02:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:03.807 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:03.807 15:02:19 -- json_config/json_config.sh@323 -- # killprocess 61044 00:15:03.807 15:02:19 -- common/autotest_common.sh@936 -- # '[' -z 61044 ']' 00:15:03.807 15:02:19 -- common/autotest_common.sh@940 -- # kill -0 61044 00:15:03.807 15:02:19 -- common/autotest_common.sh@941 -- # uname 00:15:03.807 15:02:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.807 15:02:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61044 00:15:04.066 15:02:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.066 15:02:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.066 killing process with pid 61044 00:15:04.066 15:02:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61044' 00:15:04.066 
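The shutdown sequence that follows is the usual killprocess pattern from autotest_common.sh; roughly, as a sketch of the steps visible in the log rather than the helper verbatim:
pid=61044
# only signal the process if it is still the SPDK reactor we started
[ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] || exit 1
echo "killing process with pid $pid"
kill "$pid"      # default SIGTERM; the target shuts down cleanly on it
wait "$pid"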
15:02:19 -- common/autotest_common.sh@955 -- # kill 61044 00:15:04.066 15:02:19 -- common/autotest_common.sh@960 -- # wait 61044 00:15:04.325 15:02:19 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:04.325 15:02:19 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:15:04.325 15:02:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:04.325 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.325 15:02:19 -- json_config/json_config.sh@328 -- # return 0 00:15:04.325 INFO: Success 00:15:04.325 15:02:19 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:15:04.325 00:15:04.325 real 0m7.983s 00:15:04.325 user 0m10.874s 00:15:04.325 sys 0m2.154s 00:15:04.325 ************************************ 00:15:04.325 15:02:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.325 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.325 END TEST json_config 00:15:04.325 ************************************ 00:15:04.325 15:02:19 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:04.325 15:02:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:04.325 15:02:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.325 15:02:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.325 ************************************ 00:15:04.325 START TEST json_config_extra_key 00:15:04.325 ************************************ 00:15:04.325 15:02:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:04.584 15:02:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:04.584 15:02:20 -- nvmf/common.sh@7 -- # uname -s 00:15:04.584 15:02:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.584 15:02:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.584 15:02:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.584 15:02:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.584 15:02:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.584 15:02:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.584 15:02:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.584 15:02:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.584 15:02:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.584 15:02:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.584 15:02:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:15:04.584 15:02:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:15:04.584 15:02:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.584 15:02:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.584 15:02:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:04.584 15:02:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.584 15:02:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.584 15:02:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.584 15:02:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.584 15:02:20 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.584 15:02:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.585 15:02:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.585 15:02:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.585 15:02:20 -- paths/export.sh@5 -- # export PATH 00:15:04.585 15:02:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.585 15:02:20 -- nvmf/common.sh@47 -- # : 0 00:15:04.585 15:02:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.585 15:02:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.585 15:02:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.585 15:02:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.585 15:02:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.585 15:02:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.585 15:02:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.585 15:02:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:04.585 INFO: launching applications... 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:04.585 15:02:20 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:04.585 15:02:20 -- json_config/common.sh@9 -- # local app=target 00:15:04.585 15:02:20 -- json_config/common.sh@10 -- # shift 00:15:04.585 15:02:20 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:04.585 15:02:20 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:04.585 15:02:20 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:04.585 15:02:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:04.585 15:02:20 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:04.585 15:02:20 -- json_config/common.sh@22 -- # app_pid["$app"]=61220 00:15:04.585 Waiting for target to run... 00:15:04.585 15:02:20 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:04.585 15:02:20 -- json_config/common.sh@25 -- # waitforlisten 61220 /var/tmp/spdk_tgt.sock 00:15:04.585 15:02:20 -- common/autotest_common.sh@817 -- # '[' -z 61220 ']' 00:15:04.585 15:02:20 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:04.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:04.585 15:02:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:04.585 15:02:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:04.585 15:02:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:04.585 15:02:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:04.585 15:02:20 -- common/autotest_common.sh@10 -- # set +x 00:15:04.585 [2024-04-18 15:02:20.211592] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:04.585 [2024-04-18 15:02:20.211702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61220 ] 00:15:05.152 [2024-04-18 15:02:20.607790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.152 [2024-04-18 15:02:20.685755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.410 15:02:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.410 00:15:05.410 15:02:21 -- common/autotest_common.sh@850 -- # return 0 00:15:05.410 15:02:21 -- json_config/common.sh@26 -- # echo '' 00:15:05.410 INFO: shutting down applications... 00:15:05.410 15:02:21 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
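waitforlisten above blocks until the freshly launched target is both alive and answering RPC on its socket; an illustrative standalone equivalent (a sketch only, not the helper's actual code; the retry count and probe RPC are assumptions):
pid=61220 sock=/var/tmp/spdk_tgt.sock
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo 'target exited before listening'; exit 1; }
  # treat the target as ready once any RPC succeeds on the socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.1
done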
00:15:05.410 15:02:21 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:05.410 15:02:21 -- json_config/common.sh@31 -- # local app=target 00:15:05.410 15:02:21 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:05.410 15:02:21 -- json_config/common.sh@35 -- # [[ -n 61220 ]] 00:15:05.410 15:02:21 -- json_config/common.sh@38 -- # kill -SIGINT 61220 00:15:05.410 15:02:21 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:05.410 15:02:21 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:05.410 15:02:21 -- json_config/common.sh@41 -- # kill -0 61220 00:15:05.410 15:02:21 -- json_config/common.sh@45 -- # sleep 0.5 00:15:05.977 15:02:21 -- json_config/common.sh@40 -- # (( i++ )) 00:15:05.977 15:02:21 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:05.977 15:02:21 -- json_config/common.sh@41 -- # kill -0 61220 00:15:05.977 15:02:21 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:05.977 15:02:21 -- json_config/common.sh@43 -- # break 00:15:05.977 15:02:21 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:05.977 SPDK target shutdown done 00:15:05.977 15:02:21 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:05.977 Success 00:15:05.977 15:02:21 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:05.977 00:15:05.977 real 0m1.550s 00:15:05.977 user 0m1.342s 00:15:05.977 sys 0m0.440s 00:15:05.977 15:02:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:05.977 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:15:05.977 ************************************ 00:15:05.977 END TEST json_config_extra_key 00:15:05.977 ************************************ 00:15:05.977 15:02:21 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:05.977 15:02:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:05.977 15:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.977 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:15:06.235 ************************************ 00:15:06.235 START TEST alias_rpc 00:15:06.235 ************************************ 00:15:06.235 15:02:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:06.235 * Looking for test storage... 00:15:06.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:06.235 15:02:21 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:06.235 15:02:21 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61307 00:15:06.235 15:02:21 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.235 15:02:21 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61307 00:15:06.235 15:02:21 -- common/autotest_common.sh@817 -- # '[' -z 61307 ']' 00:15:06.235 15:02:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.236 15:02:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.236 15:02:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.236 15:02:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.236 15:02:21 -- common/autotest_common.sh@10 -- # set +x 00:15:06.236 [2024-04-18 15:02:21.929984] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:06.236 [2024-04-18 15:02:21.930618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61307 ] 00:15:06.494 [2024-04-18 15:02:22.071281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.494 [2024-04-18 15:02:22.155064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.452 15:02:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.452 15:02:22 -- common/autotest_common.sh@850 -- # return 0 00:15:07.452 15:02:22 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:07.452 15:02:23 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61307 00:15:07.452 15:02:23 -- common/autotest_common.sh@936 -- # '[' -z 61307 ']' 00:15:07.452 15:02:23 -- common/autotest_common.sh@940 -- # kill -0 61307 00:15:07.452 15:02:23 -- common/autotest_common.sh@941 -- # uname 00:15:07.452 15:02:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.452 15:02:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61307 00:15:07.452 15:02:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.452 15:02:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.452 killing process with pid 61307 00:15:07.452 15:02:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61307' 00:15:07.452 15:02:23 -- common/autotest_common.sh@955 -- # kill 61307 00:15:07.452 15:02:23 -- common/autotest_common.sh@960 -- # wait 61307 00:15:08.018 00:15:08.018 real 0m1.767s 00:15:08.018 user 0m1.854s 00:15:08.018 sys 0m0.486s 00:15:08.018 15:02:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:08.018 15:02:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.018 ************************************ 00:15:08.018 END TEST alias_rpc 00:15:08.018 ************************************ 00:15:08.018 15:02:23 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:15:08.018 15:02:23 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:08.018 15:02:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.018 15:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.018 15:02:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.018 ************************************ 00:15:08.018 START TEST dpdk_mem_utility 00:15:08.018 ************************************ 00:15:08.018 15:02:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:08.277 * Looking for test storage... 
00:15:08.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:08.277 15:02:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:08.277 15:02:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61404 00:15:08.277 15:02:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61404 00:15:08.277 15:02:23 -- common/autotest_common.sh@817 -- # '[' -z 61404 ']' 00:15:08.277 15:02:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.277 15:02:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.277 15:02:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.277 15:02:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.277 15:02:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.277 15:02:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.277 [2024-04-18 15:02:23.820194] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:08.277 [2024-04-18 15:02:23.820960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 00:15:08.277 [2024-04-18 15:02:23.963374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.536 [2024-04-18 15:02:24.055199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.104 15:02:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.104 15:02:24 -- common/autotest_common.sh@850 -- # return 0 00:15:09.104 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:09.104 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:09.104 15:02:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.104 15:02:24 -- common/autotest_common.sh@10 -- # set +x 00:15:09.104 { 00:15:09.104 "filename": "/tmp/spdk_mem_dump.txt" 00:15:09.104 } 00:15:09.104 15:02:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.104 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:09.104 DPDK memory size 814.000000 MiB in 1 heap(s) 00:15:09.104 1 heaps totaling size 814.000000 MiB 00:15:09.104 size: 814.000000 MiB heap id: 0 00:15:09.104 end heaps---------- 00:15:09.104 8 mempools totaling size 598.116089 MiB 00:15:09.104 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:09.104 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:09.104 size: 84.521057 MiB name: bdev_io_61404 00:15:09.104 size: 51.011292 MiB name: evtpool_61404 00:15:09.104 size: 50.003479 MiB name: msgpool_61404 00:15:09.104 size: 21.763794 MiB name: PDU_Pool 00:15:09.104 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:09.104 size: 0.026123 MiB name: Session_Pool 00:15:09.104 end mempools------- 00:15:09.104 6 memzones totaling size 4.142822 MiB 00:15:09.104 size: 1.000366 MiB name: RG_ring_0_61404 00:15:09.104 size: 1.000366 MiB name: RG_ring_1_61404 00:15:09.104 size: 1.000366 MiB name: RG_ring_4_61404 00:15:09.104 size: 1.000366 MiB name: 
RG_ring_5_61404 00:15:09.104 size: 0.125366 MiB name: RG_ring_2_61404 00:15:09.104 size: 0.015991 MiB name: RG_ring_3_61404 00:15:09.104 end memzones------- 00:15:09.104 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:09.364 heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15 00:15:09.364 list of free elements. size: 12.486206 MiB 00:15:09.364 element at address: 0x200000400000 with size: 1.999512 MiB 00:15:09.364 element at address: 0x200018e00000 with size: 0.999878 MiB 00:15:09.364 element at address: 0x200019000000 with size: 0.999878 MiB 00:15:09.364 element at address: 0x200003e00000 with size: 0.996277 MiB 00:15:09.364 element at address: 0x200031c00000 with size: 0.994446 MiB 00:15:09.364 element at address: 0x200013800000 with size: 0.978699 MiB 00:15:09.364 element at address: 0x200007000000 with size: 0.959839 MiB 00:15:09.364 element at address: 0x200019200000 with size: 0.936584 MiB 00:15:09.364 element at address: 0x200000200000 with size: 0.837036 MiB 00:15:09.364 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:15:09.364 element at address: 0x20000b200000 with size: 0.489807 MiB 00:15:09.364 element at address: 0x200000800000 with size: 0.487061 MiB 00:15:09.364 element at address: 0x200019400000 with size: 0.485657 MiB 00:15:09.364 element at address: 0x200027e00000 with size: 0.398499 MiB 00:15:09.364 element at address: 0x200003a00000 with size: 0.350769 MiB 00:15:09.364 list of standard malloc elements. size: 199.251221 MiB 00:15:09.364 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:15:09.364 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:15:09.364 element at address: 0x200018efff80 with size: 1.000122 MiB 00:15:09.364 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:15:09.364 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:15:09.364 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:15:09.364 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:15:09.364 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:15:09.364 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:15:09.364 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7280 with size: 0.000183 MiB 
00:15:09.364 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:15:09.364 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:15:09.364 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:15:09.364 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:15:09.364 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003adb300 with size: 0.000183 MiB 00:15:09.365 element at 
address: 0x200003adb500 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003affa80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003affb40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa93f40 
with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:15:09.365 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e66040 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e66100 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 
00:15:09.365 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:15:09.365 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:15:09.366 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:15:09.366 list of memzone associated elements. 
size: 602.262573 MiB 00:15:09.366 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:15:09.366 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:09.366 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:15:09.366 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:09.366 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:15:09.366 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61404_0 00:15:09.366 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:15:09.366 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61404_0 00:15:09.366 element at address: 0x200003fff380 with size: 48.003052 MiB 00:15:09.366 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61404_0 00:15:09.366 element at address: 0x2000195be940 with size: 20.255554 MiB 00:15:09.366 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:09.366 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:15:09.366 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:09.366 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:15:09.366 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61404 00:15:09.366 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:15:09.366 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61404 00:15:09.366 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:15:09.366 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61404 00:15:09.366 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:15:09.366 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:09.366 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:15:09.366 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:09.366 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:15:09.366 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:09.366 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:15:09.366 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:09.366 element at address: 0x200003eff180 with size: 1.000488 MiB 00:15:09.366 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61404 00:15:09.366 element at address: 0x200003affc00 with size: 1.000488 MiB 00:15:09.366 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61404 00:15:09.366 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:15:09.366 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61404 00:15:09.366 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:15:09.366 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61404 00:15:09.366 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:15:09.366 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61404 00:15:09.366 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:15:09.366 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:09.366 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:15:09.366 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:09.366 element at address: 0x20001947c540 with size: 0.250488 MiB 00:15:09.366 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:09.366 element at address: 0x200003adf880 with size: 0.125488 MiB 00:15:09.366 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61404 00:15:09.366 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:15:09.366 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:09.366 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:15:09.366 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:09.366 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:15:09.366 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61404 00:15:09.366 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:15:09.366 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:09.366 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:15:09.366 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61404 00:15:09.366 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:15:09.366 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61404 00:15:09.366 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:15:09.366 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:09.366 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:09.366 15:02:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61404 00:15:09.366 15:02:24 -- common/autotest_common.sh@936 -- # '[' -z 61404 ']' 00:15:09.366 15:02:24 -- common/autotest_common.sh@940 -- # kill -0 61404 00:15:09.366 15:02:24 -- common/autotest_common.sh@941 -- # uname 00:15:09.366 15:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.366 15:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61404 00:15:09.366 killing process with pid 61404 00:15:09.366 15:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.366 15:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.366 15:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61404' 00:15:09.366 15:02:24 -- common/autotest_common.sh@955 -- # kill 61404 00:15:09.366 15:02:24 -- common/autotest_common.sh@960 -- # wait 61404 00:15:09.643 ************************************ 00:15:09.643 END TEST dpdk_mem_utility 00:15:09.643 ************************************ 00:15:09.643 00:15:09.643 real 0m1.628s 00:15:09.643 user 0m1.629s 00:15:09.643 sys 0m0.466s 00:15:09.643 15:02:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:09.643 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:15:09.643 15:02:25 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:09.643 15:02:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:09.643 15:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.643 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:15:09.902 ************************************ 00:15:09.902 START TEST event 00:15:09.902 ************************************ 00:15:09.902 15:02:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:09.902 * Looking for test storage... 
00:15:09.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:09.902 15:02:25 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:09.902 15:02:25 -- bdev/nbd_common.sh@6 -- # set -e 00:15:09.902 15:02:25 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:09.902 15:02:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:15:09.902 15:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.902 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:15:10.161 ************************************ 00:15:10.161 START TEST event_perf 00:15:10.161 ************************************ 00:15:10.161 15:02:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:10.161 Running I/O for 1 seconds...[2024-04-18 15:02:25.698976] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:10.161 [2024-04-18 15:02:25.699222] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:15:10.161 [2024-04-18 15:02:25.843054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.419 [2024-04-18 15:02:25.936328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.419 [2024-04-18 15:02:25.936394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.419 [2024-04-18 15:02:25.936446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.419 [2024-04-18 15:02:25.936448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.372 Running I/O for 1 seconds... 00:15:11.372 lcore 0: 181367 00:15:11.372 lcore 1: 181368 00:15:11.372 lcore 2: 181367 00:15:11.372 lcore 3: 181367 00:15:11.372 done. 00:15:11.372 ************************************ 00:15:11.372 END TEST event_perf 00:15:11.372 ************************************ 00:15:11.372 00:15:11.372 real 0m1.363s 00:15:11.372 user 0m4.171s 00:15:11.372 sys 0m0.067s 00:15:11.372 15:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:11.372 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.630 15:02:27 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:11.630 15:02:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:11.630 15:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.630 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.630 ************************************ 00:15:11.630 START TEST event_reactor 00:15:11.630 ************************************ 00:15:11.630 15:02:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:11.631 [2024-04-18 15:02:27.222152] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:11.631 [2024-04-18 15:02:27.222287] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:15:11.888 [2024-04-18 15:02:27.370434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.888 [2024-04-18 15:02:27.479615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.282 test_start 00:15:13.282 oneshot 00:15:13.282 tick 100 00:15:13.282 tick 100 00:15:13.282 tick 250 00:15:13.282 tick 100 00:15:13.282 tick 100 00:15:13.282 tick 100 00:15:13.282 tick 250 00:15:13.282 tick 500 00:15:13.282 tick 100 00:15:13.282 tick 100 00:15:13.282 tick 250 00:15:13.282 tick 100 00:15:13.282 tick 100 00:15:13.282 test_end 00:15:13.282 00:15:13.282 real 0m1.378s 00:15:13.282 user 0m1.205s 00:15:13.282 sys 0m0.066s 00:15:13.282 15:02:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:13.282 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:15:13.282 ************************************ 00:15:13.282 END TEST event_reactor 00:15:13.282 ************************************ 00:15:13.282 15:02:28 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:13.282 15:02:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:13.282 15:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.282 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:15:13.282 ************************************ 00:15:13.282 START TEST event_reactor_perf 00:15:13.282 ************************************ 00:15:13.282 15:02:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:13.282 [2024-04-18 15:02:28.746386] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:13.282 [2024-04-18 15:02:28.746482] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61579 ] 00:15:13.282 [2024-04-18 15:02:28.889303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.282 [2024-04-18 15:02:28.973392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.662 test_start 00:15:14.662 test_end 00:15:14.662 Performance: 471758 events per second 00:15:14.662 00:15:14.662 real 0m1.363s 00:15:14.662 user 0m1.199s 00:15:14.662 sys 0m0.056s 00:15:14.662 15:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.662 ************************************ 00:15:14.662 END TEST event_reactor_perf 00:15:14.662 ************************************ 00:15:14.662 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:14.662 15:02:30 -- event/event.sh@49 -- # uname -s 00:15:14.662 15:02:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:14.662 15:02:30 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:14.662 15:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:14.662 15:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.662 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:14.662 ************************************ 00:15:14.662 START TEST event_scheduler 00:15:14.662 ************************************ 00:15:14.662 15:02:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:14.662 * Looking for test storage... 00:15:14.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:14.662 15:02:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:14.662 15:02:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61651 00:15:14.662 15:02:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:14.662 15:02:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:14.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.922 15:02:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 61651 00:15:14.922 15:02:30 -- common/autotest_common.sh@817 -- # '[' -z 61651 ']' 00:15:14.922 15:02:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.922 15:02:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:14.922 15:02:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.922 15:02:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:14.922 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:15:14.922 [2024-04-18 15:02:30.413321] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:14.922 [2024-04-18 15:02:30.413647] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61651 ] 00:15:14.922 [2024-04-18 15:02:30.559708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.180 [2024-04-18 15:02:30.661873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.180 [2024-04-18 15:02:30.662034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.180 [2024-04-18 15:02:30.662035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.180 [2024-04-18 15:02:30.661957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.745 15:02:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:15.745 15:02:31 -- common/autotest_common.sh@850 -- # return 0 00:15:15.745 15:02:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:15.745 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.745 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:15.745 POWER: Env isn't set yet! 00:15:15.745 POWER: Attempting to initialise ACPI cpufreq power management... 00:15:15.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:15.745 POWER: Cannot set governor of lcore 0 to userspace 00:15:15.745 POWER: Attempting to initialise PSTAT power management... 00:15:15.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:15.745 POWER: Cannot set governor of lcore 0 to performance 00:15:15.745 POWER: Attempting to initialise AMD PSTATE power management... 00:15:15.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:15.745 POWER: Cannot set governor of lcore 0 to userspace 00:15:15.745 POWER: Attempting to initialise CPPC power management... 00:15:15.745 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:15.745 POWER: Cannot set governor of lcore 0 to userspace 00:15:15.745 POWER: Attempting to initialise VM power management... 00:15:15.745 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:15.745 POWER: Unable to set Power Management Environment for lcore 0 00:15:15.745 [2024-04-18 15:02:31.319320] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:15:15.745 [2024-04-18 15:02:31.319341] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:15:15.745 [2024-04-18 15:02:31.319355] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:15:15.745 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.745 15:02:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:15.745 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:15.745 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:15.745 [2024-04-18 15:02:31.407942] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:15:15.745 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:15.745 15:02:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:15.745 15:02:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:15.745 15:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.745 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 ************************************ 00:15:16.003 START TEST scheduler_create_thread 00:15:16.003 ************************************ 00:15:16.003 15:02:31 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 2 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 3 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 4 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 5 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 6 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 7 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.003 8 00:15:16.003 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.003 15:02:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:16.003 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.003 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.004 9 00:15:16.004 
15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:16.004 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.004 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.004 10 00:15:16.004 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:16.004 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.004 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.004 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:16.004 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.004 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.004 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:16.004 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.004 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:16.004 15:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:16.004 15:02:31 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:16.004 15:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.004 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:15:17.380 ************************************ 00:15:17.380 END TEST scheduler_create_thread 00:15:17.380 ************************************ 00:15:17.380 15:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:17.380 00:15:17.380 real 0m1.171s 00:15:17.380 user 0m0.013s 00:15:17.380 sys 0m0.004s 00:15:17.380 15:02:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.380 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:15:17.380 15:02:32 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:17.380 15:02:32 -- scheduler/scheduler.sh@46 -- # killprocess 61651 00:15:17.380 15:02:32 -- common/autotest_common.sh@936 -- # '[' -z 61651 ']' 00:15:17.380 15:02:32 -- common/autotest_common.sh@940 -- # kill -0 61651 00:15:17.380 15:02:32 -- common/autotest_common.sh@941 -- # uname 00:15:17.380 15:02:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.380 15:02:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61651 00:15:17.380 killing process with pid 61651 00:15:17.380 15:02:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:17.380 15:02:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:17.380 15:02:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61651' 00:15:17.380 15:02:32 -- common/autotest_common.sh@955 -- # kill 61651 00:15:17.380 15:02:32 -- common/autotest_common.sh@960 -- # wait 61651 00:15:17.650 [2024-04-18 15:02:33.123289] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:15:17.650 00:15:17.650 real 0m3.113s 00:15:17.650 user 0m5.293s 00:15:17.650 sys 0m0.464s 00:15:17.650 15:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.650 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:15:17.650 ************************************ 00:15:17.650 END TEST event_scheduler 00:15:17.650 ************************************ 00:15:17.908 15:02:33 -- event/event.sh@51 -- # modprobe -n nbd 00:15:17.908 15:02:33 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:17.908 15:02:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:17.908 15:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.908 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:15:17.908 ************************************ 00:15:17.908 START TEST app_repeat 00:15:17.908 ************************************ 00:15:17.908 15:02:33 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:15:17.908 15:02:33 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.908 15:02:33 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.908 15:02:33 -- event/event.sh@13 -- # local nbd_list 00:15:17.908 15:02:33 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:17.908 15:02:33 -- event/event.sh@14 -- # local bdev_list 00:15:17.908 15:02:33 -- event/event.sh@15 -- # local repeat_times=4 00:15:17.908 15:02:33 -- event/event.sh@17 -- # modprobe nbd 00:15:17.908 15:02:33 -- event/event.sh@19 -- # repeat_pid=61762 00:15:17.908 15:02:33 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:17.908 15:02:33 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:17.908 15:02:33 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61762' 00:15:17.908 Process app_repeat pid: 61762 00:15:17.908 spdk_app_start Round 0 00:15:17.908 15:02:33 -- event/event.sh@23 -- # for i in {0..2} 00:15:17.908 15:02:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:17.908 15:02:33 -- event/event.sh@25 -- # waitforlisten 61762 /var/tmp/spdk-nbd.sock 00:15:17.908 15:02:33 -- common/autotest_common.sh@817 -- # '[' -z 61762 ']' 00:15:17.908 15:02:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:17.908 15:02:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.908 15:02:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:17.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:17.908 15:02:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.908 15:02:33 -- common/autotest_common.sh@10 -- # set +x 00:15:17.908 [2024-04-18 15:02:33.557218] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:17.908 [2024-04-18 15:02:33.557344] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61762 ] 00:15:18.165 [2024-04-18 15:02:33.700210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:18.165 [2024-04-18 15:02:33.797216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.165 [2024-04-18 15:02:33.797216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.099 15:02:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.099 15:02:34 -- common/autotest_common.sh@850 -- # return 0 00:15:19.100 15:02:34 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:19.100 Malloc0 00:15:19.100 15:02:34 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:19.357 Malloc1 00:15:19.357 15:02:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@12 -- # local i 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.357 15:02:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:19.616 /dev/nbd0 00:15:19.616 15:02:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.616 15:02:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.616 15:02:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:19.616 15:02:35 -- common/autotest_common.sh@855 -- # local i 00:15:19.616 15:02:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:19.616 15:02:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:19.616 15:02:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:19.616 15:02:35 -- common/autotest_common.sh@859 -- # break 00:15:19.616 15:02:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:19.616 15:02:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:19.616 15:02:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:19.616 1+0 records in 00:15:19.616 1+0 records out 00:15:19.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166564 s, 24.6 MB/s 00:15:19.616 15:02:35 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.616 15:02:35 -- common/autotest_common.sh@872 -- # size=4096 00:15:19.616 15:02:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.616 15:02:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:19.616 15:02:35 -- common/autotest_common.sh@875 -- # return 0 00:15:19.616 15:02:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.616 15:02:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.616 15:02:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:19.875 /dev/nbd1 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.875 15:02:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:19.875 15:02:35 -- common/autotest_common.sh@855 -- # local i 00:15:19.875 15:02:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:19.875 15:02:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:19.875 15:02:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:19.875 15:02:35 -- common/autotest_common.sh@859 -- # break 00:15:19.875 15:02:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:19.875 15:02:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:19.875 15:02:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:19.875 1+0 records in 00:15:19.875 1+0 records out 00:15:19.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509644 s, 8.0 MB/s 00:15:19.875 15:02:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.875 15:02:35 -- common/autotest_common.sh@872 -- # size=4096 00:15:19.875 15:02:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:19.875 15:02:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:19.875 15:02:35 -- common/autotest_common.sh@875 -- # return 0 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.875 15:02:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:20.135 { 00:15:20.135 "bdev_name": "Malloc0", 00:15:20.135 "nbd_device": "/dev/nbd0" 00:15:20.135 }, 00:15:20.135 { 00:15:20.135 "bdev_name": "Malloc1", 00:15:20.135 "nbd_device": "/dev/nbd1" 00:15:20.135 } 00:15:20.135 ]' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:20.135 { 00:15:20.135 "bdev_name": "Malloc0", 00:15:20.135 "nbd_device": "/dev/nbd0" 00:15:20.135 }, 00:15:20.135 { 00:15:20.135 "bdev_name": "Malloc1", 00:15:20.135 "nbd_device": "/dev/nbd1" 00:15:20.135 } 00:15:20.135 ]' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:20.135 /dev/nbd1' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:20.135 /dev/nbd1' 00:15:20.135 15:02:35 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@65 -- # count=2 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@95 -- # count=2 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:20.135 256+0 records in 00:15:20.135 256+0 records out 00:15:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011814 s, 88.8 MB/s 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:20.135 256+0 records in 00:15:20.135 256+0 records out 00:15:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270407 s, 38.8 MB/s 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:20.135 256+0 records in 00:15:20.135 256+0 records out 00:15:20.135 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259746 s, 40.4 MB/s 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@51 -- # local i 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.135 15:02:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@41 -- # break 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.394 15:02:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@41 -- # break 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:20.652 15:02:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@65 -- # true 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@65 -- # count=0 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@104 -- # count=0 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:20.911 15:02:36 -- bdev/nbd_common.sh@109 -- # return 0 00:15:20.911 15:02:36 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:21.170 15:02:36 -- event/event.sh@35 -- # sleep 3 00:15:21.428 [2024-04-18 15:02:36.924371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:21.428 [2024-04-18 15:02:37.020282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.428 [2024-04-18 15:02:37.020284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.428 [2024-04-18 15:02:37.066112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:21.428 [2024-04-18 15:02:37.066179] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:15:24.751 15:02:39 -- event/event.sh@23 -- # for i in {0..2} 00:15:24.751 spdk_app_start Round 1 00:15:24.751 15:02:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:24.751 15:02:39 -- event/event.sh@25 -- # waitforlisten 61762 /var/tmp/spdk-nbd.sock 00:15:24.751 15:02:39 -- common/autotest_common.sh@817 -- # '[' -z 61762 ']' 00:15:24.751 15:02:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:24.751 15:02:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:24.751 15:02:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:24.751 15:02:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.751 15:02:39 -- common/autotest_common.sh@10 -- # set +x 00:15:24.751 15:02:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:24.751 15:02:39 -- common/autotest_common.sh@850 -- # return 0 00:15:24.751 15:02:39 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:24.751 Malloc0 00:15:24.751 15:02:40 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:24.751 Malloc1 00:15:24.751 15:02:40 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@12 -- # local i 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:24.751 15:02:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:25.010 /dev/nbd0 00:15:25.010 15:02:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.010 15:02:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.010 15:02:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:25.010 15:02:40 -- common/autotest_common.sh@855 -- # local i 00:15:25.010 15:02:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:25.010 15:02:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:25.010 15:02:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:25.010 15:02:40 -- common/autotest_common.sh@859 -- # break 00:15:25.010 15:02:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:25.010 15:02:40 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:15:25.010 15:02:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:25.010 1+0 records in 00:15:25.010 1+0 records out 00:15:25.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209548 s, 19.5 MB/s 00:15:25.010 15:02:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:25.010 15:02:40 -- common/autotest_common.sh@872 -- # size=4096 00:15:25.010 15:02:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:25.010 15:02:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:25.010 15:02:40 -- common/autotest_common.sh@875 -- # return 0 00:15:25.010 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.010 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.010 15:02:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:25.268 /dev/nbd1 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:25.268 15:02:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:25.268 15:02:40 -- common/autotest_common.sh@855 -- # local i 00:15:25.268 15:02:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:25.268 15:02:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:25.268 15:02:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:25.268 15:02:40 -- common/autotest_common.sh@859 -- # break 00:15:25.268 15:02:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:25.268 15:02:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:25.268 15:02:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:25.268 1+0 records in 00:15:25.268 1+0 records out 00:15:25.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378778 s, 10.8 MB/s 00:15:25.268 15:02:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:25.268 15:02:40 -- common/autotest_common.sh@872 -- # size=4096 00:15:25.268 15:02:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:25.268 15:02:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:25.268 15:02:40 -- common/autotest_common.sh@875 -- # return 0 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.268 15:02:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:25.526 { 00:15:25.526 "bdev_name": "Malloc0", 00:15:25.526 "nbd_device": "/dev/nbd0" 00:15:25.526 }, 00:15:25.526 { 00:15:25.526 "bdev_name": "Malloc1", 00:15:25.526 "nbd_device": "/dev/nbd1" 00:15:25.526 } 00:15:25.526 ]' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:25.526 { 00:15:25.526 "bdev_name": "Malloc0", 00:15:25.526 "nbd_device": "/dev/nbd0" 00:15:25.526 }, 00:15:25.526 { 00:15:25.526 "bdev_name": "Malloc1", 00:15:25.526 "nbd_device": "/dev/nbd1" 00:15:25.526 } 
00:15:25.526 ]' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:25.526 /dev/nbd1' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:25.526 /dev/nbd1' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@65 -- # count=2 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@95 -- # count=2 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:25.526 15:02:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:25.784 256+0 records in 00:15:25.784 256+0 records out 00:15:25.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119038 s, 88.1 MB/s 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:25.784 256+0 records in 00:15:25.784 256+0 records out 00:15:25.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023851 s, 44.0 MB/s 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:25.784 256+0 records in 00:15:25.784 256+0 records out 00:15:25.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290712 s, 36.1 MB/s 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:25.784 15:02:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:15:25.785 15:02:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@51 -- # local i 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.785 15:02:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@41 -- # break 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@41 -- # break 00:15:26.043 15:02:41 -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:26.301 15:02:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@65 -- # true 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@65 -- # count=0 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@104 -- # count=0 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:26.301 15:02:42 -- bdev/nbd_common.sh@109 -- # return 0 00:15:26.301 15:02:42 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:26.868 15:02:42 -- event/event.sh@35 -- # sleep 3 00:15:26.868 [2024-04-18 15:02:42.520132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:27.127 [2024-04-18 15:02:42.615413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.127 [2024-04-18 15:02:42.615414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.127 [2024-04-18 15:02:42.663846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:15:27.127 [2024-04-18 15:02:42.663913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:29.687 spdk_app_start Round 2 00:15:29.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:29.687 15:02:45 -- event/event.sh@23 -- # for i in {0..2} 00:15:29.687 15:02:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:29.687 15:02:45 -- event/event.sh@25 -- # waitforlisten 61762 /var/tmp/spdk-nbd.sock 00:15:29.687 15:02:45 -- common/autotest_common.sh@817 -- # '[' -z 61762 ']' 00:15:29.687 15:02:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:29.687 15:02:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:29.688 15:02:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:29.688 15:02:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:29.688 15:02:45 -- common/autotest_common.sh@10 -- # set +x 00:15:29.946 15:02:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:29.946 15:02:45 -- common/autotest_common.sh@850 -- # return 0 00:15:29.946 15:02:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:30.205 Malloc0 00:15:30.205 15:02:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:30.463 Malloc1 00:15:30.463 15:02:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@12 -- # local i 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.463 15:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:30.723 /dev/nbd0 00:15:30.723 15:02:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.723 15:02:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.723 15:02:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:30.723 15:02:46 -- common/autotest_common.sh@855 -- # local i 00:15:30.723 15:02:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:30.723 15:02:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:30.723 15:02:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:30.723 15:02:46 -- common/autotest_common.sh@859 
-- # break 00:15:30.723 15:02:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:30.723 15:02:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:30.723 15:02:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:30.723 1+0 records in 00:15:30.723 1+0 records out 00:15:30.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491498 s, 8.3 MB/s 00:15:30.723 15:02:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:30.723 15:02:46 -- common/autotest_common.sh@872 -- # size=4096 00:15:30.723 15:02:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:30.723 15:02:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:30.723 15:02:46 -- common/autotest_common.sh@875 -- # return 0 00:15:30.723 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.723 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.723 15:02:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:30.982 /dev/nbd1 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:30.982 15:02:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:30.982 15:02:46 -- common/autotest_common.sh@855 -- # local i 00:15:30.982 15:02:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:30.982 15:02:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:30.982 15:02:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:30.982 15:02:46 -- common/autotest_common.sh@859 -- # break 00:15:30.982 15:02:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:30.982 15:02:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:30.982 15:02:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:30.982 1+0 records in 00:15:30.982 1+0 records out 00:15:30.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531514 s, 7.7 MB/s 00:15:30.982 15:02:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:30.982 15:02:46 -- common/autotest_common.sh@872 -- # size=4096 00:15:30.982 15:02:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:30.982 15:02:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:30.982 15:02:46 -- common/autotest_common.sh@875 -- # return 0 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.982 15:02:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:31.242 { 00:15:31.242 "bdev_name": "Malloc0", 00:15:31.242 "nbd_device": "/dev/nbd0" 00:15:31.242 }, 00:15:31.242 { 00:15:31.242 "bdev_name": "Malloc1", 00:15:31.242 "nbd_device": "/dev/nbd1" 00:15:31.242 } 00:15:31.242 ]' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@64 -- # echo 
'[ 00:15:31.242 { 00:15:31.242 "bdev_name": "Malloc0", 00:15:31.242 "nbd_device": "/dev/nbd0" 00:15:31.242 }, 00:15:31.242 { 00:15:31.242 "bdev_name": "Malloc1", 00:15:31.242 "nbd_device": "/dev/nbd1" 00:15:31.242 } 00:15:31.242 ]' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:31.242 /dev/nbd1' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:31.242 /dev/nbd1' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@65 -- # count=2 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@95 -- # count=2 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:31.242 256+0 records in 00:15:31.242 256+0 records out 00:15:31.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115368 s, 90.9 MB/s 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:31.242 256+0 records in 00:15:31.242 256+0 records out 00:15:31.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248131 s, 42.3 MB/s 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:31.242 15:02:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:31.502 256+0 records in 00:15:31.502 256+0 records out 00:15:31.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255418 s, 41.1 MB/s 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:31.502 15:02:46 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@51 -- # local i 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.502 15:02:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:31.502 15:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@41 -- # break 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.761 15:02:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@41 -- # break 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.762 15:02:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@65 -- # true 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@65 -- # count=0 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@104 -- # count=0 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:32.021 15:02:47 -- bdev/nbd_common.sh@109 -- # return 0 00:15:32.021 15:02:47 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:32.279 15:02:47 -- event/event.sh@35 -- # sleep 3 00:15:32.538 [2024-04-18 15:02:48.176899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:32.797 [2024-04-18 15:02:48.270561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.797 [2024-04-18 15:02:48.270602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.797 [2024-04-18 15:02:48.316476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:15:32.798 [2024-04-18 15:02:48.316544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:35.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:35.383 15:02:50 -- event/event.sh@38 -- # waitforlisten 61762 /var/tmp/spdk-nbd.sock 00:15:35.383 15:02:50 -- common/autotest_common.sh@817 -- # '[' -z 61762 ']' 00:15:35.383 15:02:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:35.383 15:02:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:35.383 15:02:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:35.383 15:02:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:35.383 15:02:50 -- common/autotest_common.sh@10 -- # set +x 00:15:35.643 15:02:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:35.643 15:02:51 -- common/autotest_common.sh@850 -- # return 0 00:15:35.643 15:02:51 -- event/event.sh@39 -- # killprocess 61762 00:15:35.643 15:02:51 -- common/autotest_common.sh@936 -- # '[' -z 61762 ']' 00:15:35.643 15:02:51 -- common/autotest_common.sh@940 -- # kill -0 61762 00:15:35.643 15:02:51 -- common/autotest_common.sh@941 -- # uname 00:15:35.643 15:02:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.643 15:02:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61762 00:15:35.643 killing process with pid 61762 00:15:35.643 15:02:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.643 15:02:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.643 15:02:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61762' 00:15:35.643 15:02:51 -- common/autotest_common.sh@955 -- # kill 61762 00:15:35.643 15:02:51 -- common/autotest_common.sh@960 -- # wait 61762 00:15:35.903 spdk_app_start is called in Round 0. 00:15:35.903 Shutdown signal received, stop current app iteration 00:15:35.903 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:15:35.903 spdk_app_start is called in Round 1. 00:15:35.903 Shutdown signal received, stop current app iteration 00:15:35.903 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:15:35.903 spdk_app_start is called in Round 2. 00:15:35.903 Shutdown signal received, stop current app iteration 00:15:35.903 Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 reinitialization... 00:15:35.903 spdk_app_start is called in Round 3. 
00:15:35.903 Shutdown signal received, stop current app iteration 00:15:35.903 15:02:51 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:35.903 15:02:51 -- event/event.sh@42 -- # return 0 00:15:35.903 00:15:35.903 real 0m17.922s 00:15:35.903 user 0m38.673s 00:15:35.903 sys 0m3.502s 00:15:35.903 15:02:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.903 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:35.903 ************************************ 00:15:35.903 END TEST app_repeat 00:15:35.903 ************************************ 00:15:35.903 15:02:51 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:35.903 15:02:51 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:35.903 15:02:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:35.903 15:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.903 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:35.903 ************************************ 00:15:35.903 START TEST cpu_locks 00:15:35.903 ************************************ 00:15:35.903 15:02:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:36.161 * Looking for test storage... 00:15:36.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:36.161 15:02:51 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:36.161 15:02:51 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:36.161 15:02:51 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:36.161 15:02:51 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:36.161 15:02:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:36.161 15:02:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.161 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.161 ************************************ 00:15:36.161 START TEST default_locks 00:15:36.161 ************************************ 00:15:36.161 15:02:51 -- common/autotest_common.sh@1111 -- # default_locks 00:15:36.161 15:02:51 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62388 00:15:36.161 15:02:51 -- event/cpu_locks.sh@47 -- # waitforlisten 62388 00:15:36.161 15:02:51 -- common/autotest_common.sh@817 -- # '[' -z 62388 ']' 00:15:36.161 15:02:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.161 15:02:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.161 15:02:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.161 15:02:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.161 15:02:51 -- common/autotest_common.sh@10 -- # set +x 00:15:36.161 15:02:51 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:36.420 [2024-04-18 15:02:51.868285] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:36.420 [2024-04-18 15:02:51.868385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62388 ] 00:15:36.420 [2024-04-18 15:02:52.010659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.420 [2024-04-18 15:02:52.108794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.355 15:02:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.355 15:02:52 -- common/autotest_common.sh@850 -- # return 0 00:15:37.355 15:02:52 -- event/cpu_locks.sh@49 -- # locks_exist 62388 00:15:37.355 15:02:52 -- event/cpu_locks.sh@22 -- # lslocks -p 62388 00:15:37.355 15:02:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:37.922 15:02:53 -- event/cpu_locks.sh@50 -- # killprocess 62388 00:15:37.922 15:02:53 -- common/autotest_common.sh@936 -- # '[' -z 62388 ']' 00:15:37.922 15:02:53 -- common/autotest_common.sh@940 -- # kill -0 62388 00:15:37.922 15:02:53 -- common/autotest_common.sh@941 -- # uname 00:15:37.922 15:02:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.922 15:02:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62388 00:15:37.922 15:02:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.922 15:02:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.922 killing process with pid 62388 00:15:37.922 15:02:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62388' 00:15:37.922 15:02:53 -- common/autotest_common.sh@955 -- # kill 62388 00:15:37.922 15:02:53 -- common/autotest_common.sh@960 -- # wait 62388 00:15:38.180 15:02:53 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62388 00:15:38.180 15:02:53 -- common/autotest_common.sh@638 -- # local es=0 00:15:38.180 15:02:53 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62388 00:15:38.180 15:02:53 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:38.180 15:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:38.180 15:02:53 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:38.180 15:02:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:38.180 15:02:53 -- common/autotest_common.sh@641 -- # waitforlisten 62388 00:15:38.180 15:02:53 -- common/autotest_common.sh@817 -- # '[' -z 62388 ']' 00:15:38.180 15:02:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.180 15:02:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.180 15:02:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:38.180 15:02:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.180 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:38.180 ERROR: process (pid: 62388) is no longer running 00:15:38.180 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62388) - No such process 00:15:38.180 15:02:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.180 15:02:53 -- common/autotest_common.sh@850 -- # return 1 00:15:38.180 15:02:53 -- common/autotest_common.sh@641 -- # es=1 00:15:38.180 15:02:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:38.180 15:02:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:38.180 15:02:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:38.180 15:02:53 -- event/cpu_locks.sh@54 -- # no_locks 00:15:38.180 15:02:53 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:38.180 15:02:53 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:38.180 15:02:53 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:38.180 00:15:38.180 real 0m2.046s 00:15:38.180 user 0m2.169s 00:15:38.180 sys 0m0.681s 00:15:38.180 15:02:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.180 ************************************ 00:15:38.180 END TEST default_locks 00:15:38.180 ************************************ 00:15:38.180 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:38.438 15:02:53 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:38.438 15:02:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:38.438 15:02:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.438 15:02:53 -- common/autotest_common.sh@10 -- # set +x 00:15:38.438 ************************************ 00:15:38.438 START TEST default_locks_via_rpc 00:15:38.438 ************************************ 00:15:38.438 15:02:54 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:15:38.438 15:02:54 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62457 00:15:38.438 15:02:54 -- event/cpu_locks.sh@63 -- # waitforlisten 62457 00:15:38.438 15:02:54 -- common/autotest_common.sh@817 -- # '[' -z 62457 ']' 00:15:38.438 15:02:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.438 15:02:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.438 15:02:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.438 15:02:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.438 15:02:54 -- common/autotest_common.sh@10 -- # set +x 00:15:38.438 15:02:54 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:38.438 [2024-04-18 15:02:54.064709] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:38.438 [2024-04-18 15:02:54.064812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62457 ] 00:15:38.698 [2024-04-18 15:02:54.205854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.698 [2024-04-18 15:02:54.308587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.634 15:02:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:39.634 15:02:55 -- common/autotest_common.sh@850 -- # return 0 00:15:39.634 15:02:55 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:39.634 15:02:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.634 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 15:02:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.634 15:02:55 -- event/cpu_locks.sh@67 -- # no_locks 00:15:39.634 15:02:55 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:39.634 15:02:55 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:39.634 15:02:55 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:39.634 15:02:55 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:39.634 15:02:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.634 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:15:39.634 15:02:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.634 15:02:55 -- event/cpu_locks.sh@71 -- # locks_exist 62457 00:15:39.634 15:02:55 -- event/cpu_locks.sh@22 -- # lslocks -p 62457 00:15:39.634 15:02:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:39.894 15:02:55 -- event/cpu_locks.sh@73 -- # killprocess 62457 00:15:39.894 15:02:55 -- common/autotest_common.sh@936 -- # '[' -z 62457 ']' 00:15:39.894 15:02:55 -- common/autotest_common.sh@940 -- # kill -0 62457 00:15:39.894 15:02:55 -- common/autotest_common.sh@941 -- # uname 00:15:39.894 15:02:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.894 15:02:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62457 00:15:39.894 15:02:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.894 killing process with pid 62457 00:15:39.894 15:02:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.894 15:02:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62457' 00:15:39.894 15:02:55 -- common/autotest_common.sh@955 -- # kill 62457 00:15:39.894 15:02:55 -- common/autotest_common.sh@960 -- # wait 62457 00:15:40.462 00:15:40.462 real 0m1.861s 00:15:40.462 user 0m2.023s 00:15:40.462 sys 0m0.556s 00:15:40.462 15:02:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.462 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:15:40.462 ************************************ 00:15:40.462 END TEST default_locks_via_rpc 00:15:40.462 ************************************ 00:15:40.462 15:02:55 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:40.462 15:02:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:40.462 15:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.462 15:02:55 -- common/autotest_common.sh@10 -- # set +x 00:15:40.462 ************************************ 00:15:40.462 START TEST non_locking_app_on_locked_coremask 00:15:40.462 ************************************ 00:15:40.462 15:02:56 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:15:40.462 15:02:56 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62530 00:15:40.462 15:02:56 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:40.462 15:02:56 -- event/cpu_locks.sh@81 -- # waitforlisten 62530 /var/tmp/spdk.sock 00:15:40.462 15:02:56 -- common/autotest_common.sh@817 -- # '[' -z 62530 ']' 00:15:40.462 15:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.462 15:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.462 15:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.462 15:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.462 15:02:56 -- common/autotest_common.sh@10 -- # set +x 00:15:40.462 [2024-04-18 15:02:56.071670] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:40.462 [2024-04-18 15:02:56.071764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62530 ] 00:15:40.722 [2024-04-18 15:02:56.202805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.722 [2024-04-18 15:02:56.300965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.291 15:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.291 15:02:56 -- common/autotest_common.sh@850 -- # return 0 00:15:41.291 15:02:56 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:41.291 15:02:56 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62558 00:15:41.291 15:02:56 -- event/cpu_locks.sh@85 -- # waitforlisten 62558 /var/tmp/spdk2.sock 00:15:41.291 15:02:56 -- common/autotest_common.sh@817 -- # '[' -z 62558 ']' 00:15:41.291 15:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:41.291 15:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.291 15:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:41.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:41.291 15:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.291 15:02:56 -- common/autotest_common.sh@10 -- # set +x 00:15:41.291 [2024-04-18 15:02:56.987911] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:41.291 [2024-04-18 15:02:56.988251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62558 ] 00:15:41.551 [2024-04-18 15:02:57.124720] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:41.551 [2024-04-18 15:02:57.124778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.809 [2024-04-18 15:02:57.315238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.375 15:02:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.376 15:02:57 -- common/autotest_common.sh@850 -- # return 0 00:15:42.376 15:02:57 -- event/cpu_locks.sh@87 -- # locks_exist 62530 00:15:42.376 15:02:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:42.376 15:02:57 -- event/cpu_locks.sh@22 -- # lslocks -p 62530 00:15:43.314 15:02:58 -- event/cpu_locks.sh@89 -- # killprocess 62530 00:15:43.314 15:02:58 -- common/autotest_common.sh@936 -- # '[' -z 62530 ']' 00:15:43.314 15:02:58 -- common/autotest_common.sh@940 -- # kill -0 62530 00:15:43.314 15:02:58 -- common/autotest_common.sh@941 -- # uname 00:15:43.314 15:02:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:43.314 15:02:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62530 00:15:43.314 15:02:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:43.314 15:02:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:43.314 killing process with pid 62530 00:15:43.314 15:02:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62530' 00:15:43.314 15:02:58 -- common/autotest_common.sh@955 -- # kill 62530 00:15:43.314 15:02:58 -- common/autotest_common.sh@960 -- # wait 62530 00:15:44.250 15:02:59 -- event/cpu_locks.sh@90 -- # killprocess 62558 00:15:44.250 15:02:59 -- common/autotest_common.sh@936 -- # '[' -z 62558 ']' 00:15:44.251 15:02:59 -- common/autotest_common.sh@940 -- # kill -0 62558 00:15:44.251 15:02:59 -- common/autotest_common.sh@941 -- # uname 00:15:44.251 15:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.251 15:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62558 00:15:44.251 killing process with pid 62558 00:15:44.251 15:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.251 15:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.251 15:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62558' 00:15:44.251 15:02:59 -- common/autotest_common.sh@955 -- # kill 62558 00:15:44.251 15:02:59 -- common/autotest_common.sh@960 -- # wait 62558 00:15:44.509 00:15:44.509 real 0m4.135s 00:15:44.509 user 0m4.449s 00:15:44.509 sys 0m1.169s 00:15:44.509 15:03:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:44.509 ************************************ 00:15:44.509 END TEST non_locking_app_on_locked_coremask 00:15:44.509 ************************************ 00:15:44.509 15:03:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.509 15:03:00 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:44.509 15:03:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:44.509 15:03:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.509 15:03:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.769 ************************************ 00:15:44.769 START TEST locking_app_on_unlocked_coremask 00:15:44.769 ************************************ 00:15:44.769 15:03:00 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:15:44.769 15:03:00 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:44.769 15:03:00 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62641 00:15:44.769 15:03:00 -- event/cpu_locks.sh@99 -- # waitforlisten 62641 /var/tmp/spdk.sock 00:15:44.769 15:03:00 -- common/autotest_common.sh@817 -- # '[' -z 62641 ']' 00:15:44.769 15:03:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.769 15:03:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:44.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.769 15:03:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.769 15:03:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:44.769 15:03:00 -- common/autotest_common.sh@10 -- # set +x 00:15:44.769 [2024-04-18 15:03:00.330588] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:44.769 [2024-04-18 15:03:00.330695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62641 ] 00:15:44.769 [2024-04-18 15:03:00.472814] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:44.769 [2024-04-18 15:03:00.472881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.028 [2024-04-18 15:03:00.571710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.596 15:03:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:45.597 15:03:01 -- common/autotest_common.sh@850 -- # return 0 00:15:45.597 15:03:01 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62669 00:15:45.597 15:03:01 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:45.597 15:03:01 -- event/cpu_locks.sh@103 -- # waitforlisten 62669 /var/tmp/spdk2.sock 00:15:45.597 15:03:01 -- common/autotest_common.sh@817 -- # '[' -z 62669 ']' 00:15:45.597 15:03:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:45.597 15:03:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.597 15:03:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:45.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:45.597 15:03:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.597 15:03:01 -- common/autotest_common.sh@10 -- # set +x 00:15:45.597 [2024-04-18 15:03:01.275246] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:45.597 [2024-04-18 15:03:01.275626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62669 ] 00:15:45.856 [2024-04-18 15:03:01.416095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.115 [2024-04-18 15:03:01.615449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.684 15:03:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:46.684 15:03:02 -- common/autotest_common.sh@850 -- # return 0 00:15:46.684 15:03:02 -- event/cpu_locks.sh@105 -- # locks_exist 62669 00:15:46.684 15:03:02 -- event/cpu_locks.sh@22 -- # lslocks -p 62669 00:15:46.684 15:03:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:47.632 15:03:03 -- event/cpu_locks.sh@107 -- # killprocess 62641 00:15:47.632 15:03:03 -- common/autotest_common.sh@936 -- # '[' -z 62641 ']' 00:15:47.632 15:03:03 -- common/autotest_common.sh@940 -- # kill -0 62641 00:15:47.632 15:03:03 -- common/autotest_common.sh@941 -- # uname 00:15:47.632 15:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.632 15:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62641 00:15:47.632 killing process with pid 62641 00:15:47.632 15:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:47.632 15:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:47.632 15:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62641' 00:15:47.632 15:03:03 -- common/autotest_common.sh@955 -- # kill 62641 00:15:47.632 15:03:03 -- common/autotest_common.sh@960 -- # wait 62641 00:15:48.578 15:03:03 -- event/cpu_locks.sh@108 -- # killprocess 62669 00:15:48.578 15:03:03 -- common/autotest_common.sh@936 -- # '[' -z 62669 ']' 00:15:48.578 15:03:03 -- common/autotest_common.sh@940 -- # kill -0 62669 00:15:48.578 15:03:03 -- common/autotest_common.sh@941 -- # uname 00:15:48.578 15:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.578 15:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62669 00:15:48.578 15:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.578 15:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.578 killing process with pid 62669 00:15:48.578 15:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62669' 00:15:48.578 15:03:03 -- common/autotest_common.sh@955 -- # kill 62669 00:15:48.578 15:03:03 -- common/autotest_common.sh@960 -- # wait 62669 00:15:48.837 00:15:48.837 real 0m4.127s 00:15:48.837 user 0m4.420s 00:15:48.837 sys 0m1.277s 00:15:48.837 15:03:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.837 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.837 ************************************ 00:15:48.837 END TEST locking_app_on_unlocked_coremask 00:15:48.837 ************************************ 00:15:48.837 15:03:04 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:48.837 15:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:48.837 15:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.837 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:15:48.837 ************************************ 00:15:48.837 START TEST locking_app_on_locked_coremask 00:15:48.838 
************************************ 00:15:48.838 15:03:04 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:15:48.838 15:03:04 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:48.838 15:03:04 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62752 00:15:48.838 15:03:04 -- event/cpu_locks.sh@116 -- # waitforlisten 62752 /var/tmp/spdk.sock 00:15:48.838 15:03:04 -- common/autotest_common.sh@817 -- # '[' -z 62752 ']' 00:15:48.838 15:03:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.838 15:03:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.096 15:03:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.096 15:03:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.096 15:03:04 -- common/autotest_common.sh@10 -- # set +x 00:15:49.096 [2024-04-18 15:03:04.603684] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:49.096 [2024-04-18 15:03:04.603775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62752 ] 00:15:49.096 [2024-04-18 15:03:04.737621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.354 [2024-04-18 15:03:04.834971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.922 15:03:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.922 15:03:05 -- common/autotest_common.sh@850 -- # return 0 00:15:49.922 15:03:05 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:49.922 15:03:05 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62780 00:15:49.922 15:03:05 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62780 /var/tmp/spdk2.sock 00:15:49.922 15:03:05 -- common/autotest_common.sh@638 -- # local es=0 00:15:49.922 15:03:05 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62780 /var/tmp/spdk2.sock 00:15:49.922 15:03:05 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:49.922 15:03:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:49.922 15:03:05 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:49.922 15:03:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:49.922 15:03:05 -- common/autotest_common.sh@641 -- # waitforlisten 62780 /var/tmp/spdk2.sock 00:15:49.922 15:03:05 -- common/autotest_common.sh@817 -- # '[' -z 62780 ']' 00:15:49.922 15:03:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:49.922 15:03:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.922 15:03:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:49.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:49.922 15:03:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.922 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:15:49.922 [2024-04-18 15:03:05.541811] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:49.922 [2024-04-18 15:03:05.541896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62780 ] 00:15:50.181 [2024-04-18 15:03:05.679612] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62752 has claimed it. 00:15:50.181 [2024-04-18 15:03:05.679673] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:50.748 ERROR: process (pid: 62780) is no longer running 00:15:50.748 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62780) - No such process 00:15:50.748 15:03:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.748 15:03:06 -- common/autotest_common.sh@850 -- # return 1 00:15:50.748 15:03:06 -- common/autotest_common.sh@641 -- # es=1 00:15:50.748 15:03:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:50.748 15:03:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:50.748 15:03:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:50.748 15:03:06 -- event/cpu_locks.sh@122 -- # locks_exist 62752 00:15:50.748 15:03:06 -- event/cpu_locks.sh@22 -- # lslocks -p 62752 00:15:50.748 15:03:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:51.007 15:03:06 -- event/cpu_locks.sh@124 -- # killprocess 62752 00:15:51.007 15:03:06 -- common/autotest_common.sh@936 -- # '[' -z 62752 ']' 00:15:51.007 15:03:06 -- common/autotest_common.sh@940 -- # kill -0 62752 00:15:51.007 15:03:06 -- common/autotest_common.sh@941 -- # uname 00:15:51.007 15:03:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.007 15:03:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62752 00:15:51.268 killing process with pid 62752 00:15:51.268 15:03:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:51.268 15:03:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.268 15:03:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62752' 00:15:51.268 15:03:06 -- common/autotest_common.sh@955 -- # kill 62752 00:15:51.268 15:03:06 -- common/autotest_common.sh@960 -- # wait 62752 00:15:51.527 00:15:51.527 real 0m2.591s 00:15:51.527 user 0m2.868s 00:15:51.527 sys 0m0.679s 00:15:51.527 15:03:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.527 ************************************ 00:15:51.527 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.527 END TEST locking_app_on_locked_coremask 00:15:51.527 ************************************ 00:15:51.527 15:03:07 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:15:51.527 15:03:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:51.527 15:03:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.527 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.786 ************************************ 00:15:51.786 START TEST locking_overlapped_coremask 00:15:51.786 ************************************ 00:15:51.786 15:03:07 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:15:51.786 15:03:07 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62841 00:15:51.786 15:03:07 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:51.786 15:03:07 -- event/cpu_locks.sh@133 -- # waitforlisten 62841 /var/tmp/spdk.sock 00:15:51.786 
15:03:07 -- common/autotest_common.sh@817 -- # '[' -z 62841 ']' 00:15:51.786 15:03:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.786 15:03:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.786 15:03:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.786 15:03:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.786 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:15:51.786 [2024-04-18 15:03:07.344404] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:51.786 [2024-04-18 15:03:07.344721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62841 ] 00:15:51.786 [2024-04-18 15:03:07.486773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:52.045 [2024-04-18 15:03:07.583367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.045 [2024-04-18 15:03:07.583590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.045 [2024-04-18 15:03:07.583594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:52.622 15:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:52.622 15:03:08 -- common/autotest_common.sh@850 -- # return 0 00:15:52.622 15:03:08 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:15:52.622 15:03:08 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62871 00:15:52.622 15:03:08 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62871 /var/tmp/spdk2.sock 00:15:52.622 15:03:08 -- common/autotest_common.sh@638 -- # local es=0 00:15:52.622 15:03:08 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62871 /var/tmp/spdk2.sock 00:15:52.622 15:03:08 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:52.622 15:03:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:52.622 15:03:08 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:52.623 15:03:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:52.623 15:03:08 -- common/autotest_common.sh@641 -- # waitforlisten 62871 /var/tmp/spdk2.sock 00:15:52.623 15:03:08 -- common/autotest_common.sh@817 -- # '[' -z 62871 ']' 00:15:52.623 15:03:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:52.623 15:03:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:52.623 15:03:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:52.623 15:03:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.623 15:03:08 -- common/autotest_common.sh@10 -- # set +x 00:15:52.623 [2024-04-18 15:03:08.256482] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:52.623 [2024-04-18 15:03:08.256572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62871 ] 00:15:52.891 [2024-04-18 15:03:08.395703] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62841 has claimed it. 00:15:52.891 [2024-04-18 15:03:08.395777] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:15:53.460 ERROR: process (pid: 62871) is no longer running 00:15:53.460 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62871) - No such process 00:15:53.460 15:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.460 15:03:08 -- common/autotest_common.sh@850 -- # return 1 00:15:53.460 15:03:08 -- common/autotest_common.sh@641 -- # es=1 00:15:53.460 15:03:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:53.460 15:03:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:53.460 15:03:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:53.460 15:03:08 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:15:53.460 15:03:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:53.460 15:03:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:53.460 15:03:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:53.460 15:03:08 -- event/cpu_locks.sh@141 -- # killprocess 62841 00:15:53.460 15:03:08 -- common/autotest_common.sh@936 -- # '[' -z 62841 ']' 00:15:53.460 15:03:08 -- common/autotest_common.sh@940 -- # kill -0 62841 00:15:53.460 15:03:08 -- common/autotest_common.sh@941 -- # uname 00:15:53.460 15:03:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.460 15:03:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62841 00:15:53.460 15:03:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.460 15:03:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.460 15:03:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62841' 00:15:53.460 killing process with pid 62841 00:15:53.460 15:03:08 -- common/autotest_common.sh@955 -- # kill 62841 00:15:53.460 15:03:08 -- common/autotest_common.sh@960 -- # wait 62841 00:15:53.719 00:15:53.719 real 0m2.134s 00:15:53.719 user 0m5.724s 00:15:53.719 sys 0m0.484s 00:15:53.719 15:03:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.719 15:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.719 ************************************ 00:15:53.719 END TEST locking_overlapped_coremask 00:15:53.719 ************************************ 00:15:53.978 15:03:09 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:15:53.979 15:03:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:53.979 15:03:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.979 15:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.979 ************************************ 00:15:53.979 START TEST locking_overlapped_coremask_via_rpc 00:15:53.979 ************************************ 
00:15:53.979 15:03:09 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:15:53.979 15:03:09 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62921 00:15:53.979 15:03:09 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:15:53.979 15:03:09 -- event/cpu_locks.sh@149 -- # waitforlisten 62921 /var/tmp/spdk.sock 00:15:53.979 15:03:09 -- common/autotest_common.sh@817 -- # '[' -z 62921 ']' 00:15:53.979 15:03:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.979 15:03:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.979 15:03:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.979 15:03:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.979 15:03:09 -- common/autotest_common.sh@10 -- # set +x 00:15:53.979 [2024-04-18 15:03:09.627836] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:53.979 [2024-04-18 15:03:09.627918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62921 ] 00:15:54.237 [2024-04-18 15:03:09.771814] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:54.237 [2024-04-18 15:03:09.771874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.237 [2024-04-18 15:03:09.864795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.237 [2024-04-18 15:03:09.864960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.237 [2024-04-18 15:03:09.864967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.175 15:03:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.175 15:03:10 -- common/autotest_common.sh@850 -- # return 0 00:15:55.175 15:03:10 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:15:55.175 15:03:10 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62951 00:15:55.175 15:03:10 -- event/cpu_locks.sh@153 -- # waitforlisten 62951 /var/tmp/spdk2.sock 00:15:55.175 15:03:10 -- common/autotest_common.sh@817 -- # '[' -z 62951 ']' 00:15:55.175 15:03:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:55.175 15:03:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.175 15:03:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:55.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:55.175 15:03:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.175 15:03:10 -- common/autotest_common.sh@10 -- # set +x 00:15:55.175 [2024-04-18 15:03:10.586972] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:55.175 [2024-04-18 15:03:10.587277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62951 ] 00:15:55.175 [2024-04-18 15:03:10.730643] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:55.175 [2024-04-18 15:03:10.730686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.434 [2024-04-18 15:03:10.920568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.434 [2024-04-18 15:03:10.920646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.434 [2024-04-18 15:03:10.920650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:56.002 15:03:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.002 15:03:11 -- common/autotest_common.sh@850 -- # return 0 00:15:56.002 15:03:11 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:56.002 15:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.002 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 15:03:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:56.002 15:03:11 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:56.002 15:03:11 -- common/autotest_common.sh@638 -- # local es=0 00:15:56.002 15:03:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:56.002 15:03:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:56.002 15:03:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:56.002 15:03:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:56.002 15:03:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:56.002 15:03:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:56.002 15:03:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:56.002 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 [2024-04-18 15:03:11.476647] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62921 has claimed it. 
00:15:56.002 2024/04/18 15:03:11 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:15:56.002 request: 00:15:56.002 { 00:15:56.002 "method": "framework_enable_cpumask_locks", 00:15:56.002 "params": {} 00:15:56.002 } 00:15:56.002 Got JSON-RPC error response 00:15:56.002 GoRPCClient: error on JSON-RPC call 00:15:56.002 15:03:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:56.002 15:03:11 -- common/autotest_common.sh@641 -- # es=1 00:15:56.002 15:03:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:56.002 15:03:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:56.002 15:03:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:56.002 15:03:11 -- event/cpu_locks.sh@158 -- # waitforlisten 62921 /var/tmp/spdk.sock 00:15:56.002 15:03:11 -- common/autotest_common.sh@817 -- # '[' -z 62921 ']' 00:15:56.002 15:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.002 15:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.002 15:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.002 15:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.002 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.002 15:03:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.002 15:03:11 -- common/autotest_common.sh@850 -- # return 0 00:15:56.002 15:03:11 -- event/cpu_locks.sh@159 -- # waitforlisten 62951 /var/tmp/spdk2.sock 00:15:56.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:56.002 15:03:11 -- common/autotest_common.sh@817 -- # '[' -z 62951 ']' 00:15:56.002 15:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:56.002 15:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.002 15:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:15:56.002 15:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.002 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.261 15:03:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.261 15:03:11 -- common/autotest_common.sh@850 -- # return 0 00:15:56.261 15:03:11 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:56.261 15:03:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:56.261 15:03:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:56.261 ************************************ 00:15:56.261 END TEST locking_overlapped_coremask_via_rpc 00:15:56.261 ************************************ 00:15:56.261 15:03:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:56.261 00:15:56.261 real 0m2.361s 00:15:56.261 user 0m1.026s 00:15:56.261 sys 0m0.261s 00:15:56.261 15:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:56.261 15:03:11 -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 15:03:11 -- event/cpu_locks.sh@174 -- # cleanup 00:15:56.520 15:03:11 -- event/cpu_locks.sh@15 -- # [[ -z 62921 ]] 00:15:56.520 15:03:11 -- event/cpu_locks.sh@15 -- # killprocess 62921 00:15:56.520 15:03:11 -- common/autotest_common.sh@936 -- # '[' -z 62921 ']' 00:15:56.520 15:03:11 -- common/autotest_common.sh@940 -- # kill -0 62921 00:15:56.520 15:03:11 -- common/autotest_common.sh@941 -- # uname 00:15:56.520 15:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.520 15:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62921 00:15:56.520 killing process with pid 62921 00:15:56.520 15:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.520 15:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.520 15:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62921' 00:15:56.520 15:03:12 -- common/autotest_common.sh@955 -- # kill 62921 00:15:56.520 15:03:12 -- common/autotest_common.sh@960 -- # wait 62921 00:15:56.779 15:03:12 -- event/cpu_locks.sh@16 -- # [[ -z 62951 ]] 00:15:56.779 15:03:12 -- event/cpu_locks.sh@16 -- # killprocess 62951 00:15:56.779 15:03:12 -- common/autotest_common.sh@936 -- # '[' -z 62951 ']' 00:15:56.779 15:03:12 -- common/autotest_common.sh@940 -- # kill -0 62951 00:15:56.779 15:03:12 -- common/autotest_common.sh@941 -- # uname 00:15:56.779 15:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.779 15:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62951 00:15:56.779 killing process with pid 62951 00:15:56.779 15:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:56.779 15:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:56.779 15:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62951' 00:15:56.779 15:03:12 -- common/autotest_common.sh@955 -- # kill 62951 00:15:56.779 15:03:12 -- common/autotest_common.sh@960 -- # wait 62951 00:15:57.347 15:03:12 -- event/cpu_locks.sh@18 -- # rm -f 00:15:57.347 Process with pid 62921 is not found 00:15:57.347 Process with pid 62951 is not found 00:15:57.347 15:03:12 -- event/cpu_locks.sh@1 -- # cleanup 00:15:57.347 15:03:12 -- event/cpu_locks.sh@15 -- # [[ -z 62921 ]] 
00:15:57.347 15:03:12 -- event/cpu_locks.sh@15 -- # killprocess 62921 00:15:57.347 15:03:12 -- common/autotest_common.sh@936 -- # '[' -z 62921 ']' 00:15:57.347 15:03:12 -- common/autotest_common.sh@940 -- # kill -0 62921 00:15:57.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62921) - No such process 00:15:57.347 15:03:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62921 is not found' 00:15:57.347 15:03:12 -- event/cpu_locks.sh@16 -- # [[ -z 62951 ]] 00:15:57.347 15:03:12 -- event/cpu_locks.sh@16 -- # killprocess 62951 00:15:57.347 15:03:12 -- common/autotest_common.sh@936 -- # '[' -z 62951 ']' 00:15:57.347 15:03:12 -- common/autotest_common.sh@940 -- # kill -0 62951 00:15:57.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62951) - No such process 00:15:57.347 15:03:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62951 is not found' 00:15:57.347 15:03:12 -- event/cpu_locks.sh@18 -- # rm -f 00:15:57.347 00:15:57.347 real 0m21.246s 00:15:57.347 user 0m34.334s 00:15:57.347 sys 0m6.295s 00:15:57.347 15:03:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.347 15:03:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.347 ************************************ 00:15:57.347 END TEST cpu_locks 00:15:57.347 ************************************ 00:15:57.347 00:15:57.347 real 0m47.462s 00:15:57.347 user 1m25.245s 00:15:57.347 sys 0m11.051s 00:15:57.347 15:03:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:57.347 15:03:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.347 ************************************ 00:15:57.347 END TEST event 00:15:57.347 ************************************ 00:15:57.347 15:03:12 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:57.347 15:03:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:57.347 15:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.347 15:03:12 -- common/autotest_common.sh@10 -- # set +x 00:15:57.347 ************************************ 00:15:57.347 START TEST thread 00:15:57.347 ************************************ 00:15:57.347 15:03:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:57.613 * Looking for test storage... 00:15:57.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:57.613 15:03:13 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:57.613 15:03:13 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:15:57.613 15:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:57.613 15:03:13 -- common/autotest_common.sh@10 -- # set +x 00:15:57.613 ************************************ 00:15:57.613 START TEST thread_poller_perf 00:15:57.613 ************************************ 00:15:57.613 15:03:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:57.613 [2024-04-18 15:03:13.314090] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:15:57.613 [2024-04-18 15:03:13.314191] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:15:57.875 [2024-04-18 15:03:13.471352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.875 [2024-04-18 15:03:13.565542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.875 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:59.276 ====================================== 00:15:59.276 busy:2502165700 (cyc) 00:15:59.276 total_run_count: 386000 00:15:59.276 tsc_hz: 2490000000 (cyc) 00:15:59.276 ====================================== 00:15:59.276 poller_cost: 6482 (cyc), 2603 (nsec) 00:15:59.276 00:15:59.276 real 0m1.382s 00:15:59.276 ************************************ 00:15:59.276 END TEST thread_poller_perf 00:15:59.276 ************************************ 00:15:59.276 user 0m1.204s 00:15:59.276 sys 0m0.058s 00:15:59.276 15:03:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:59.276 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:15:59.276 15:03:14 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:59.276 15:03:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:15:59.276 15:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.276 15:03:14 -- common/autotest_common.sh@10 -- # set +x 00:15:59.276 ************************************ 00:15:59.276 START TEST thread_poller_perf 00:15:59.276 ************************************ 00:15:59.276 15:03:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:59.276 [2024-04-18 15:03:14.836224] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:15:59.276 [2024-04-18 15:03:14.836343] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63149 ] 00:15:59.535 [2024-04-18 15:03:14.981941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.535 Running 1000 pollers for 1 seconds with 0 microseconds period. 
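The figures in the summary block above are consistent with poller_cost simply being the measured busy TSC cycles divided by the number of completed poller runs, converted to nanoseconds with the reported tsc_hz:

    poller_cost(cyc)  = busy / total_run_count = 2502165700 / 386000       ~= 6482
    poller_cost(nsec) = poller_cost(cyc) / tsc_hz * 1e9 = 6482 / 2.49e9 * 1e9 ~= 2603

The same relation holds for the 0-microsecond-period run reported next (2493243092 / 5084000 ~= 490 cyc, i.e. about 196 nsec per poll).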
00:15:59.535 [2024-04-18 15:03:15.074747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.471 ====================================== 00:16:00.471 busy:2493243092 (cyc) 00:16:00.471 total_run_count: 5084000 00:16:00.471 tsc_hz: 2490000000 (cyc) 00:16:00.471 ====================================== 00:16:00.471 poller_cost: 490 (cyc), 196 (nsec) 00:16:00.471 00:16:00.471 real 0m1.360s 00:16:00.471 user 0m1.199s 00:16:00.471 sys 0m0.054s 00:16:00.471 15:03:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:00.471 ************************************ 00:16:00.471 END TEST thread_poller_perf 00:16:00.471 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:16:00.471 ************************************ 00:16:00.730 15:03:16 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:00.730 ************************************ 00:16:00.730 END TEST thread 00:16:00.730 ************************************ 00:16:00.730 00:16:00.730 real 0m3.181s 00:16:00.730 user 0m2.563s 00:16:00.730 sys 0m0.367s 00:16:00.730 15:03:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:00.730 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:16:00.730 15:03:16 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:00.730 15:03:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:00.730 15:03:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.730 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:16:00.730 ************************************ 00:16:00.730 START TEST accel 00:16:00.730 ************************************ 00:16:00.730 15:03:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:00.990 * Looking for test storage... 00:16:00.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:00.990 15:03:16 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:00.990 15:03:16 -- accel/accel.sh@82 -- # get_expected_opcs 00:16:00.990 15:03:16 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:00.990 15:03:16 -- accel/accel.sh@62 -- # spdk_tgt_pid=63233 00:16:00.990 15:03:16 -- accel/accel.sh@63 -- # waitforlisten 63233 00:16:00.990 15:03:16 -- common/autotest_common.sh@817 -- # '[' -z 63233 ']' 00:16:00.990 15:03:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.990 15:03:16 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:00.990 15:03:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.990 15:03:16 -- accel/accel.sh@61 -- # build_accel_config 00:16:00.990 15:03:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.990 15:03:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.990 15:03:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:00.990 15:03:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:00.990 15:03:16 -- common/autotest_common.sh@10 -- # set +x 00:16:00.990 15:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:00.990 15:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:00.990 15:03:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:00.990 15:03:16 -- accel/accel.sh@40 -- # local IFS=, 00:16:00.990 15:03:16 -- accel/accel.sh@41 -- # jq -r . 
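The accel suite starting here brings up spdk_tgt, waits for it to answer on /var/tmp/spdk.sock, and then records which module the target assigned to each accel opcode; on this run every opcode resolves to the software module. A condensed sketch of that readback, pieced together from the get_expected_opcs trace below (rpc_cmd is the harness's wrapper around the target's RPC socket; the herestring is an assumption of how the pair is split):

    declare -A expected_opcs
    # Fetch the opcode -> module map and flatten it to "opc=module" pairs.
    exp_opcs=($(rpc_cmd accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=$module   # e.g. copy=software, fill=software, ...
    done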
00:16:00.990 [2024-04-18 15:03:16.593201] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:00.990 [2024-04-18 15:03:16.593298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63233 ] 00:16:01.249 [2024-04-18 15:03:16.743499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.249 [2024-04-18 15:03:16.851175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.817 15:03:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.817 15:03:17 -- common/autotest_common.sh@850 -- # return 0 00:16:01.817 15:03:17 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:01.817 15:03:17 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:01.817 15:03:17 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:01.817 15:03:17 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:01.817 15:03:17 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:01.817 15:03:17 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:01.817 15:03:17 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:16:01.817 15:03:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.817 15:03:17 -- common/autotest_common.sh@10 -- # set +x 00:16:02.077 15:03:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # IFS== 00:16:02.077 15:03:17 -- accel/accel.sh@72 -- # read -r opc module 00:16:02.077 15:03:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:02.077 15:03:17 -- accel/accel.sh@75 -- # killprocess 63233 00:16:02.077 15:03:17 -- common/autotest_common.sh@936 -- # '[' -z 63233 ']' 00:16:02.077 15:03:17 -- common/autotest_common.sh@940 -- # kill -0 63233 00:16:02.077 15:03:17 -- common/autotest_common.sh@941 -- # uname 00:16:02.077 15:03:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:02.077 15:03:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63233 00:16:02.077 15:03:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:02.077 15:03:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:02.077 killing process with pid 63233 00:16:02.077 15:03:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63233' 00:16:02.077 15:03:17 -- common/autotest_common.sh@955 -- # kill 63233 00:16:02.077 15:03:17 -- common/autotest_common.sh@960 -- # wait 63233 00:16:02.336 15:03:17 -- accel/accel.sh@76 -- # trap - ERR 00:16:02.336 15:03:17 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:02.336 15:03:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:02.336 15:03:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.336 15:03:17 -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 15:03:18 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:16:02.596 15:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:02.596 15:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:16:02.596 15:03:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:02.596 15:03:18 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:02.596 15:03:18 -- accel/accel.sh@40 -- # local IFS=, 00:16:02.596 15:03:18 -- accel/accel.sh@41 -- # jq -r . 00:16:02.596 15:03:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:02.596 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 15:03:18 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:02.596 15:03:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:02.596 15:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.596 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:16:02.596 ************************************ 00:16:02.596 START TEST accel_missing_filename 00:16:02.596 ************************************ 00:16:02.596 15:03:18 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:16:02.596 15:03:18 -- common/autotest_common.sh@638 -- # local es=0 00:16:02.596 15:03:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:02.596 15:03:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:02.596 15:03:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.596 15:03:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:02.596 15:03:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.596 15:03:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:16:02.596 15:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:02.596 15:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:16:02.596 15:03:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:02.596 15:03:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:02.596 15:03:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:02.596 15:03:18 -- accel/accel.sh@40 -- # local IFS=, 00:16:02.596 15:03:18 -- accel/accel.sh@41 -- # jq -r . 00:16:02.596 [2024-04-18 15:03:18.258439] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:02.596 [2024-04-18 15:03:18.258522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63312 ] 00:16:02.855 [2024-04-18 15:03:18.395223] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.855 [2024-04-18 15:03:18.486369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.855 [2024-04-18 15:03:18.529469] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:03.115 [2024-04-18 15:03:18.590326] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:03.115 A filename is required. 
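The "A filename is required." message above is the whole point of accel_missing_filename: a compress workload is launched without -l, accel_perf refuses to start, and the NOT wrapper counts the failure as a pass. In the exit-status handling that follows, statuses above 128 are remapped (234 becomes 106 here, 161 becomes 33 in the compress_verify case) and then normalized to es=1. The next case supplies the input file but adds -y, which compress rejects in turn:

    # accel_compress_verify's invocation, as traced below: -l names the
    # uncompressed input file; -y asks for verification, which the compress
    # workload does not support, so this is again expected to fail.
    accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y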
00:16:03.115 15:03:18 -- common/autotest_common.sh@641 -- # es=234 00:16:03.115 15:03:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:03.115 15:03:18 -- common/autotest_common.sh@650 -- # es=106 00:16:03.115 15:03:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:03.115 15:03:18 -- common/autotest_common.sh@658 -- # es=1 00:16:03.115 15:03:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:03.115 00:16:03.115 real 0m0.462s 00:16:03.115 user 0m0.297s 00:16:03.115 sys 0m0.100s 00:16:03.115 15:03:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.115 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:16:03.115 ************************************ 00:16:03.115 END TEST accel_missing_filename 00:16:03.115 ************************************ 00:16:03.115 15:03:18 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:03.115 15:03:18 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:03.115 15:03:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.115 15:03:18 -- common/autotest_common.sh@10 -- # set +x 00:16:03.373 ************************************ 00:16:03.373 START TEST accel_compress_verify 00:16:03.373 ************************************ 00:16:03.373 15:03:18 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:03.373 15:03:18 -- common/autotest_common.sh@638 -- # local es=0 00:16:03.373 15:03:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:03.373 15:03:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:03.373 15:03:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:03.373 15:03:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:03.373 15:03:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:03.373 15:03:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:03.373 15:03:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:03.373 15:03:18 -- accel/accel.sh@12 -- # build_accel_config 00:16:03.373 15:03:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:03.373 15:03:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:03.373 15:03:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:03.373 15:03:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:03.373 15:03:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:03.373 15:03:18 -- accel/accel.sh@40 -- # local IFS=, 00:16:03.373 15:03:18 -- accel/accel.sh@41 -- # jq -r . 00:16:03.373 [2024-04-18 15:03:18.877876] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:03.373 [2024-04-18 15:03:18.877963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:16:03.373 [2024-04-18 15:03:19.019410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.648 [2024-04-18 15:03:19.110606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.648 [2024-04-18 15:03:19.155906] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:03.648 [2024-04-18 15:03:19.216465] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:03.648 00:16:03.648 Compression does not support the verify option, aborting. 00:16:03.648 15:03:19 -- common/autotest_common.sh@641 -- # es=161 00:16:03.648 15:03:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:03.649 15:03:19 -- common/autotest_common.sh@650 -- # es=33 00:16:03.649 15:03:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:03.649 15:03:19 -- common/autotest_common.sh@658 -- # es=1 00:16:03.649 15:03:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:03.649 00:16:03.649 real 0m0.472s 00:16:03.649 user 0m0.306s 00:16:03.649 sys 0m0.105s 00:16:03.649 15:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.649 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:03.649 ************************************ 00:16:03.649 END TEST accel_compress_verify 00:16:03.649 ************************************ 00:16:03.908 15:03:19 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:03.908 15:03:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:03.908 15:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.908 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:03.908 ************************************ 00:16:03.908 START TEST accel_wrong_workload 00:16:03.908 ************************************ 00:16:03.908 15:03:19 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:16:03.908 15:03:19 -- common/autotest_common.sh@638 -- # local es=0 00:16:03.908 15:03:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:03.908 15:03:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:03.908 15:03:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:03.908 15:03:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:03.908 15:03:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:03.908 15:03:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:16:03.908 15:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:03.908 15:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:16:03.908 15:03:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:03.908 15:03:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:03.908 15:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:03.908 15:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:03.908 15:03:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:03.908 15:03:19 -- accel/accel.sh@40 -- # local IFS=, 00:16:03.908 15:03:19 -- accel/accel.sh@41 -- # jq -r . 
00:16:03.908 Unsupported workload type: foobar 00:16:03.908 [2024-04-18 15:03:19.476135] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:03.908 accel_perf options: 00:16:03.908 [-h help message] 00:16:03.908 [-q queue depth per core] 00:16:03.908 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:03.908 [-T number of threads per core 00:16:03.908 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:03.908 [-t time in seconds] 00:16:03.908 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:03.908 [ dif_verify, , dif_generate, dif_generate_copy 00:16:03.908 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:03.908 [-l for compress/decompress workloads, name of uncompressed input file 00:16:03.908 [-S for crc32c workload, use this seed value (default 0) 00:16:03.908 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:03.908 [-f for fill workload, use this BYTE value (default 255) 00:16:03.908 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:03.908 [-y verify result if this switch is on] 00:16:03.908 [-a tasks to allocate per core (default: same value as -q)] 00:16:03.908 Can be used to spread operations across a wider range of memory. 00:16:03.908 15:03:19 -- common/autotest_common.sh@641 -- # es=1 00:16:03.908 15:03:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:03.908 15:03:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:03.908 15:03:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:03.909 00:16:03.909 real 0m0.027s 00:16:03.909 user 0m0.012s 00:16:03.909 sys 0m0.015s 00:16:03.909 15:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.909 ************************************ 00:16:03.909 END TEST accel_wrong_workload 00:16:03.909 ************************************ 00:16:03.909 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:03.909 15:03:19 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:03.909 15:03:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:03.909 15:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.909 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:04.168 ************************************ 00:16:04.168 START TEST accel_negative_buffers 00:16:04.168 ************************************ 00:16:04.168 15:03:19 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:04.168 15:03:19 -- common/autotest_common.sh@638 -- # local es=0 00:16:04.168 15:03:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:04.168 15:03:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:04.168 15:03:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.168 15:03:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:04.168 15:03:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:04.168 15:03:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:16:04.168 15:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:04.168 15:03:19 -- accel/accel.sh@12 -- # 
build_accel_config 00:16:04.168 15:03:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:04.168 15:03:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:04.168 15:03:19 -- accel/accel.sh@40 -- # local IFS=, 00:16:04.168 15:03:19 -- accel/accel.sh@41 -- # jq -r . 00:16:04.168 -x option must be non-negative. 00:16:04.168 [2024-04-18 15:03:19.650041] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:04.168 accel_perf options: 00:16:04.168 [-h help message] 00:16:04.168 [-q queue depth per core] 00:16:04.168 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:04.168 [-T number of threads per core 00:16:04.168 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:04.168 [-t time in seconds] 00:16:04.168 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:04.168 [ dif_verify, , dif_generate, dif_generate_copy 00:16:04.168 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:04.168 [-l for compress/decompress workloads, name of uncompressed input file 00:16:04.168 [-S for crc32c workload, use this seed value (default 0) 00:16:04.168 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:04.168 [-f for fill workload, use this BYTE value (default 255) 00:16:04.168 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:04.168 [-y verify result if this switch is on] 00:16:04.168 [-a tasks to allocate per core (default: same value as -q)] 00:16:04.168 Can be used to spread operations across a wider range of memory. 
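The usage text above accompanies the accel_negative_buffers failure: -x sets the number of xor source buffers (minimum 2 per the option list), so the "-x -1" passed by the test is rejected during argument parsing. Reading the option list against invocations used elsewhere in this suite:

    # Rejected above: xor with a negative source-buffer count.
    accel_perf -t 1 -w xor -y -x -1
    # Hypothetical accepted variant assembled from the -h output (not run in this log):
    accel_perf -t 1 -w xor -y -x 2
    # The next case's workload: crc32c with seed 32, verifying the result.
    accel_perf -t 1 -w crc32c -S 32 -y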
00:16:04.168 15:03:19 -- common/autotest_common.sh@641 -- # es=1 00:16:04.168 15:03:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:04.168 15:03:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:04.168 15:03:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:04.168 00:16:04.168 real 0m0.039s 00:16:04.168 user 0m0.024s 00:16:04.168 sys 0m0.015s 00:16:04.168 ************************************ 00:16:04.168 END TEST accel_negative_buffers 00:16:04.168 15:03:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:04.168 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:04.168 ************************************ 00:16:04.168 15:03:19 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:04.168 15:03:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:04.168 15:03:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.168 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:16:04.168 ************************************ 00:16:04.168 START TEST accel_crc32c 00:16:04.168 ************************************ 00:16:04.168 15:03:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:04.168 15:03:19 -- accel/accel.sh@16 -- # local accel_opc 00:16:04.168 15:03:19 -- accel/accel.sh@17 -- # local accel_module 00:16:04.168 15:03:19 -- accel/accel.sh@19 -- # IFS=: 00:16:04.168 15:03:19 -- accel/accel.sh@19 -- # read -r var val 00:16:04.168 15:03:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:04.168 15:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:16:04.168 15:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:16:04.168 15:03:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:04.168 15:03:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:04.168 15:03:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:04.168 15:03:19 -- accel/accel.sh@40 -- # local IFS=, 00:16:04.168 15:03:19 -- accel/accel.sh@41 -- # jq -r . 00:16:04.168 [2024-04-18 15:03:19.832205] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:04.168 [2024-04-18 15:03:19.832278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63416 ] 00:16:04.428 [2024-04-18 15:03:19.976403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.428 [2024-04-18 15:03:20.078458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=0x1 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=crc32c 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=32 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=software 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@22 -- # accel_module=software 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=32 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=32 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=1 00:16:04.428 15:03:20 
-- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.428 15:03:20 -- accel/accel.sh@20 -- # val=Yes 00:16:04.428 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.428 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.687 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.687 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.687 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.687 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:04.687 15:03:20 -- accel/accel.sh@20 -- # val= 00:16:04.687 15:03:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:04.687 15:03:20 -- accel/accel.sh@19 -- # IFS=: 00:16:04.687 15:03:20 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:05.625 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.625 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.625 ************************************ 00:16:05.625 END TEST accel_crc32c 00:16:05.625 ************************************ 00:16:05.625 15:03:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:05.625 15:03:21 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:05.625 15:03:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:05.625 00:16:05.625 real 0m1.487s 00:16:05.625 user 0m1.286s 00:16:05.625 sys 0m0.104s 00:16:05.625 15:03:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:05.625 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:05.885 15:03:21 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:05.885 15:03:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:05.885 15:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:05.885 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:16:05.885 ************************************ 00:16:05.885 START TEST accel_crc32c_C2 00:16:05.885 
************************************ 00:16:05.885 15:03:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:05.885 15:03:21 -- accel/accel.sh@16 -- # local accel_opc 00:16:05.885 15:03:21 -- accel/accel.sh@17 -- # local accel_module 00:16:05.885 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:05.885 15:03:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:05.885 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:05.885 15:03:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:05.885 15:03:21 -- accel/accel.sh@12 -- # build_accel_config 00:16:05.885 15:03:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:05.885 15:03:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:05.885 15:03:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:05.885 15:03:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:05.885 15:03:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:05.885 15:03:21 -- accel/accel.sh@40 -- # local IFS=, 00:16:05.885 15:03:21 -- accel/accel.sh@41 -- # jq -r . 00:16:05.885 [2024-04-18 15:03:21.456462] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:05.885 [2024-04-18 15:03:21.456531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63457 ] 00:16:06.144 [2024-04-18 15:03:21.598812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.144 [2024-04-18 15:03:21.693953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val=0x1 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val=crc32c 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.144 15:03:21 -- accel/accel.sh@20 -- # val=0 00:16:06.144 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.144 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" 
in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val=software 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@22 -- # accel_module=software 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val=32 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val=32 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val=1 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val=Yes 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:06.145 15:03:21 -- accel/accel.sh@20 -- # val= 00:16:06.145 15:03:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # IFS=: 00:16:06.145 15:03:21 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 ************************************ 00:16:07.523 END TEST accel_crc32c_C2 00:16:07.523 ************************************ 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- 
accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@20 -- # val= 00:16:07.523 15:03:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:22 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:07.523 15:03:22 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:07.523 15:03:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:07.523 00:16:07.523 real 0m1.470s 00:16:07.523 user 0m1.283s 00:16:07.523 sys 0m0.095s 00:16:07.523 15:03:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.523 15:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:07.523 15:03:22 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:07.523 15:03:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:07.523 15:03:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.523 15:03:22 -- common/autotest_common.sh@10 -- # set +x 00:16:07.523 ************************************ 00:16:07.523 START TEST accel_copy 00:16:07.523 ************************************ 00:16:07.523 15:03:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:16:07.523 15:03:23 -- accel/accel.sh@16 -- # local accel_opc 00:16:07.523 15:03:23 -- accel/accel.sh@17 -- # local accel_module 00:16:07.523 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.523 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.523 15:03:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:07.523 15:03:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:07.523 15:03:23 -- accel/accel.sh@12 -- # build_accel_config 00:16:07.523 15:03:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:07.523 15:03:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:07.523 15:03:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:07.523 15:03:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:07.523 15:03:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:07.523 15:03:23 -- accel/accel.sh@40 -- # local IFS=, 00:16:07.523 15:03:23 -- accel/accel.sh@41 -- # jq -r . 00:16:07.523 [2024-04-18 15:03:23.101123] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:07.524 [2024-04-18 15:03:23.101328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63496 ] 00:16:07.782 [2024-04-18 15:03:23.244514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.782 [2024-04-18 15:03:23.344466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val=0x1 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val=copy 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@23 -- # accel_opc=copy 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.782 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.782 15:03:23 -- accel/accel.sh@20 -- # val=software 00:16:07.782 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@22 -- # accel_module=software 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val=32 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val=32 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val=1 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:07.783 
15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val=Yes 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:07.783 15:03:23 -- accel/accel.sh@20 -- # val= 00:16:07.783 15:03:23 -- accel/accel.sh@21 -- # case "$var" in 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # IFS=: 00:16:07.783 15:03:23 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@20 -- # val= 00:16:09.178 15:03:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:09.178 15:03:24 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:09.178 15:03:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:09.178 00:16:09.178 real 0m1.488s 00:16:09.178 user 0m1.283s 00:16:09.178 sys 0m0.113s 00:16:09.178 15:03:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:09.178 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:16:09.178 ************************************ 00:16:09.178 END TEST accel_copy 00:16:09.178 ************************************ 00:16:09.178 15:03:24 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:09.178 15:03:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:09.178 15:03:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.178 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:16:09.178 ************************************ 00:16:09.178 START TEST accel_fill 00:16:09.178 ************************************ 00:16:09.178 15:03:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:09.178 15:03:24 -- accel/accel.sh@16 -- # local accel_opc 00:16:09.178 15:03:24 -- accel/accel.sh@17 -- # local 
accel_module 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # IFS=: 00:16:09.178 15:03:24 -- accel/accel.sh@19 -- # read -r var val 00:16:09.178 15:03:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:09.178 15:03:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:09.178 15:03:24 -- accel/accel.sh@12 -- # build_accel_config 00:16:09.178 15:03:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:09.178 15:03:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:09.178 15:03:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:09.178 15:03:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:09.178 15:03:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:09.178 15:03:24 -- accel/accel.sh@40 -- # local IFS=, 00:16:09.178 15:03:24 -- accel/accel.sh@41 -- # jq -r . 00:16:09.178 [2024-04-18 15:03:24.769054] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:09.178 [2024-04-18 15:03:24.769139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63535 ] 00:16:09.438 [2024-04-18 15:03:24.913531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.438 [2024-04-18 15:03:25.003371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=0x1 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=fill 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@23 -- # accel_opc=fill 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=0x80 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case 
"$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=software 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@22 -- # accel_module=software 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=64 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=64 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=1 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val=Yes 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:09.438 15:03:25 -- accel/accel.sh@20 -- # val= 00:16:09.438 15:03:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # IFS=: 00:16:09.438 15:03:25 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:10.818 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:16:10.818 15:03:26 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:10.818 15:03:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:10.818 00:16:10.818 real 0m1.478s 00:16:10.818 user 0m1.279s 00:16:10.818 sys 0m0.110s 00:16:10.818 15:03:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.818 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.818 ************************************ 00:16:10.818 END TEST accel_fill 00:16:10.818 ************************************ 00:16:10.818 15:03:26 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:10.818 15:03:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:10.818 15:03:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.818 15:03:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.818 ************************************ 00:16:10.818 START TEST accel_copy_crc32c 00:16:10.818 ************************************ 00:16:10.818 15:03:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:16:10.818 15:03:26 -- accel/accel.sh@16 -- # local accel_opc 00:16:10.818 15:03:26 -- accel/accel.sh@17 -- # local accel_module 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:10.818 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:10.818 15:03:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:10.818 15:03:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:16:10.818 15:03:26 -- accel/accel.sh@12 -- # build_accel_config 00:16:10.818 15:03:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:10.818 15:03:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:10.818 15:03:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:10.818 15:03:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:10.818 15:03:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:10.818 15:03:26 -- accel/accel.sh@40 -- # local IFS=, 00:16:10.818 15:03:26 -- accel/accel.sh@41 -- # jq -r . 00:16:10.818 [2024-04-18 15:03:26.399829] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
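Note: each accel case in this suite is driven through the accel_perf example binary whose full command line appears in the trace; for the case starting here that is accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y. A minimal sketch of re-running the same workload by hand, assuming a local SPDK checkout built with examples; dropping the '-c /dev/fd/62' argument is also an assumption (the harness feeds an effectively empty accel JSON config over that descriptor, as the empty accel_json_cfg=() checks in the trace suggest):

  # Re-run the copy_crc32c workload from this log outside the test harness:
  # 1-second run (-t 1), copy_crc32c workload (-w), with verification enabled
  # (-y, which the trace's "Yes" field appears to reflect).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y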
00:16:10.818 [2024-04-18 15:03:26.399922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63580 ] 00:16:11.078 [2024-04-18 15:03:26.540308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.078 [2024-04-18 15:03:26.635970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=0x1 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=0 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=software 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@22 -- # accel_module=software 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=32 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=32 
00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=1 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val=Yes 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:11.078 15:03:26 -- accel/accel.sh@20 -- # val= 00:16:11.078 15:03:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # IFS=: 00:16:11.078 15:03:26 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@20 -- # val= 00:16:12.461 15:03:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:27 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:12.461 15:03:27 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:12.461 15:03:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:12.461 00:16:12.461 real 0m1.480s 00:16:12.461 user 0m1.279s 00:16:12.461 sys 0m0.116s 00:16:12.461 ************************************ 00:16:12.461 END TEST accel_copy_crc32c 00:16:12.461 ************************************ 00:16:12.461 15:03:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.461 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:16:12.461 15:03:27 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:12.461 15:03:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:16:12.461 15:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.461 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:16:12.461 ************************************ 00:16:12.461 START TEST accel_copy_crc32c_C2 00:16:12.461 ************************************ 00:16:12.461 15:03:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:12.461 15:03:28 -- accel/accel.sh@16 -- # local accel_opc 00:16:12.461 15:03:28 -- accel/accel.sh@17 -- # local accel_module 00:16:12.461 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.461 15:03:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:12.461 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.461 15:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:16:12.461 15:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.461 15:03:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.461 15:03:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.461 15:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.461 15:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.461 15:03:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.461 15:03:28 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.461 15:03:28 -- accel/accel.sh@41 -- # jq -r . 00:16:12.461 [2024-04-18 15:03:28.039794] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:12.461 [2024-04-18 15:03:28.039888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63614 ] 00:16:12.721 [2024-04-18 15:03:28.173487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.721 [2024-04-18 15:03:28.266447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=0x1 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=0 00:16:12.721 15:03:28 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=software 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@22 -- # accel_module=software 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=32 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=32 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=1 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val=Yes 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:12.721 15:03:28 -- accel/accel.sh@20 -- # val= 00:16:12.721 15:03:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # IFS=: 00:16:12.721 15:03:28 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 
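Note: the long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' entries above are accel.sh walking accel_perf's own description of the run (core mask 0x1, the opcode, '4096 bytes', software, the queue depths, '1 seconds', Yes) line by line as key:value pairs, keeping only the opcode and module for the checks at the end of each case. A minimal sketch of that loop shape, assuming simplified match patterns rather than quoting the real accel.sh:

  #!/usr/bin/env bash
  # Shape of the parse loop whose xtrace fills most of this section: read
  # key:value pairs from accel_perf and remember the fields checked later.
  # The *opcode*/*module* patterns are illustrative, not the real ones.
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=$val ;;     # e.g. copy_crc32c
          *module*) accel_module=$val ;;  # e.g. software
          *) ;;                           # all other fields are read and dropped
      esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2)
  echo "opcode=$accel_opc module=$accel_module"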
00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.113 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:14.113 15:03:29 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:14.113 15:03:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:14.113 ************************************ 00:16:14.113 END TEST accel_copy_crc32c_C2 00:16:14.113 ************************************ 00:16:14.113 00:16:14.113 real 0m1.461s 00:16:14.113 user 0m1.265s 00:16:14.113 sys 0m0.103s 00:16:14.113 15:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:14.113 15:03:29 -- common/autotest_common.sh@10 -- # set +x 00:16:14.113 15:03:29 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:14.113 15:03:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:14.113 15:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.113 15:03:29 -- common/autotest_common.sh@10 -- # set +x 00:16:14.113 ************************************ 00:16:14.113 START TEST accel_dualcast 00:16:14.113 ************************************ 00:16:14.113 15:03:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:16:14.113 15:03:29 -- accel/accel.sh@16 -- # local accel_opc 00:16:14.113 15:03:29 -- accel/accel.sh@17 -- # local accel_module 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.113 15:03:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:14.113 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.113 15:03:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:16:14.113 15:03:29 -- accel/accel.sh@12 -- # build_accel_config 00:16:14.113 15:03:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:14.113 15:03:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:14.113 15:03:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:14.113 15:03:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:14.113 15:03:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:14.113 15:03:29 -- accel/accel.sh@40 -- # local IFS=, 00:16:14.113 15:03:29 -- accel/accel.sh@41 -- # jq -r . 00:16:14.113 [2024-04-18 15:03:29.645911] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
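Note: the star banners and the real/user/sys triplet that close every case come from the run_test helper in autotest_common.sh (its xtrace_disable calls and argument checks are also visible in the trace). A minimal sketch of the wrapper's observable behaviour, assuming a simplified form of the real helper:

  # Banner, timed run, banner - the pattern bracketing each accel case above.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # emits the real/user/sys lines seen after each case
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  # Invocation as it appears in the trace above for the dualcast case:
  run_test accel_dualcast accel_test -t 1 -w dualcast -y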
00:16:14.113 [2024-04-18 15:03:29.645972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63658 ] 00:16:14.113 [2024-04-18 15:03:29.785514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.373 [2024-04-18 15:03:29.876850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=0x1 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=dualcast 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=software 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@22 -- # accel_module=software 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=32 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=32 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=1 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val='1 seconds' 
00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val=Yes 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.373 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.373 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.373 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:14.374 15:03:29 -- accel/accel.sh@20 -- # val= 00:16:14.374 15:03:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.374 15:03:29 -- accel/accel.sh@19 -- # IFS=: 00:16:14.374 15:03:29 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:15.753 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:15.753 15:03:31 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:15.753 15:03:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:15.753 00:16:15.753 real 0m1.461s 00:16:15.753 user 0m1.277s 00:16:15.753 sys 0m0.093s 00:16:15.753 15:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.753 ************************************ 00:16:15.753 END TEST accel_dualcast 00:16:15.753 ************************************ 00:16:15.753 15:03:31 -- common/autotest_common.sh@10 -- # set +x 00:16:15.753 15:03:31 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:15.753 15:03:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:15.753 15:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.753 15:03:31 -- common/autotest_common.sh@10 -- # set +x 00:16:15.753 ************************************ 00:16:15.753 START TEST accel_compare 00:16:15.753 ************************************ 00:16:15.753 15:03:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:16:15.753 15:03:31 -- accel/accel.sh@16 -- # local accel_opc 00:16:15.753 15:03:31 -- accel/accel.sh@17 -- # local 
accel_module 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:15.753 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:15.753 15:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:15.753 15:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:16:15.753 15:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:16:15.753 15:03:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.753 15:03:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.753 15:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.753 15:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.753 15:03:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.753 15:03:31 -- accel/accel.sh@40 -- # local IFS=, 00:16:15.753 15:03:31 -- accel/accel.sh@41 -- # jq -r . 00:16:15.753 [2024-04-18 15:03:31.263895] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:15.753 [2024-04-18 15:03:31.263969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63691 ] 00:16:15.753 [2024-04-18 15:03:31.396503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.012 [2024-04-18 15:03:31.489727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=0x1 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=compare 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@23 -- # accel_opc=compare 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=software 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 
00:16:16.012 15:03:31 -- accel/accel.sh@22 -- # accel_module=software 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=32 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=32 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=1 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val=Yes 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:16.012 15:03:31 -- accel/accel.sh@20 -- # val= 00:16:16.012 15:03:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # IFS=: 00:16:16.012 15:03:31 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@20 -- # val= 00:16:17.438 15:03:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:17.438 15:03:32 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:17.438 15:03:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:17.438 00:16:17.438 real 0m1.465s 00:16:17.438 user 0m1.271s 00:16:17.438 sys 
0m0.102s 00:16:17.438 15:03:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:17.438 ************************************ 00:16:17.438 END TEST accel_compare 00:16:17.438 ************************************ 00:16:17.438 15:03:32 -- common/autotest_common.sh@10 -- # set +x 00:16:17.438 15:03:32 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:17.438 15:03:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:17.438 15:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.438 15:03:32 -- common/autotest_common.sh@10 -- # set +x 00:16:17.438 ************************************ 00:16:17.438 START TEST accel_xor 00:16:17.438 ************************************ 00:16:17.438 15:03:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:16:17.438 15:03:32 -- accel/accel.sh@16 -- # local accel_opc 00:16:17.438 15:03:32 -- accel/accel.sh@17 -- # local accel_module 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # IFS=: 00:16:17.438 15:03:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:17.438 15:03:32 -- accel/accel.sh@19 -- # read -r var val 00:16:17.438 15:03:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:16:17.438 15:03:32 -- accel/accel.sh@12 -- # build_accel_config 00:16:17.438 15:03:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:17.438 15:03:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:17.438 15:03:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:17.438 15:03:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:17.438 15:03:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:17.438 15:03:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:17.438 15:03:32 -- accel/accel.sh@41 -- # jq -r . 00:16:17.438 [2024-04-18 15:03:32.886179] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
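Note: the xor workload is exercised twice in this suite: first with accel_test -t 1 -w xor -y (the trace's val=2 entry suggests the default two source buffers), then again further down with -x 3 (val=3, three source buffers). A minimal sketch of the two direct invocations, with the same local-path assumption as before:

  # The two xor runs exercised here: default source count, then three sources.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w xor -y        # 2 xor sources (default, per the val=2 entry)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # 3 xor sources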
00:16:17.438 [2024-04-18 15:03:32.886400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63735 ] 00:16:17.438 [2024-04-18 15:03:33.027424] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.438 [2024-04-18 15:03:33.115176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=0x1 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=xor 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=2 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=software 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@22 -- # accel_module=software 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=32 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=32 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=1 00:16:17.698 15:03:33 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val=Yes 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:17.698 15:03:33 -- accel/accel.sh@20 -- # val= 00:16:17.698 15:03:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # IFS=: 00:16:17.698 15:03:33 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:18.637 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.637 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.637 15:03:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:18.637 15:03:34 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:18.637 15:03:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:18.637 00:16:18.637 real 0m1.466s 00:16:18.637 user 0m1.275s 00:16:18.637 sys 0m0.101s 00:16:18.637 15:03:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:18.637 15:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.637 ************************************ 00:16:18.637 END TEST accel_xor 00:16:18.637 ************************************ 00:16:18.896 15:03:34 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:18.896 15:03:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:18.896 15:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.896 15:03:34 -- common/autotest_common.sh@10 -- # set +x 00:16:18.896 ************************************ 00:16:18.896 START TEST accel_xor 00:16:18.896 ************************************ 00:16:18.896 
15:03:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:16:18.896 15:03:34 -- accel/accel.sh@16 -- # local accel_opc 00:16:18.896 15:03:34 -- accel/accel.sh@17 -- # local accel_module 00:16:18.896 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:18.896 15:03:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:18.896 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:18.896 15:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:16:18.896 15:03:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:18.896 15:03:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:18.896 15:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:18.896 15:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:18.896 15:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:18.896 15:03:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:18.896 15:03:34 -- accel/accel.sh@40 -- # local IFS=, 00:16:18.896 15:03:34 -- accel/accel.sh@41 -- # jq -r . 00:16:18.896 [2024-04-18 15:03:34.493932] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:18.896 [2024-04-18 15:03:34.494038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63768 ] 00:16:19.154 [2024-04-18 15:03:34.627797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.154 [2024-04-18 15:03:34.716861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=0x1 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=xor 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=3 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 
00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=software 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@22 -- # accel_module=software 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=32 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=32 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val=1 00:16:19.154 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.154 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.154 15:03:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:19.155 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.155 15:03:34 -- accel/accel.sh@20 -- # val=Yes 00:16:19.155 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.155 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.155 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:19.155 15:03:34 -- accel/accel.sh@20 -- # val= 00:16:19.155 15:03:34 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # IFS=: 00:16:19.155 15:03:34 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.550 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.550 15:03:35 -- accel/accel.sh@20 -- # val= 00:16:20.550 ************************************ 00:16:20.550 
END TEST accel_xor 00:16:20.550 ************************************ 00:16:20.551 15:03:35 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.551 15:03:35 -- accel/accel.sh@19 -- # IFS=: 00:16:20.551 15:03:35 -- accel/accel.sh@19 -- # read -r var val 00:16:20.551 15:03:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:20.551 15:03:35 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:20.551 15:03:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:20.551 00:16:20.551 real 0m1.458s 00:16:20.551 user 0m1.268s 00:16:20.551 sys 0m0.099s 00:16:20.551 15:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:20.551 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:20.551 15:03:35 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:20.551 15:03:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:20.551 15:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.551 15:03:35 -- common/autotest_common.sh@10 -- # set +x 00:16:20.551 ************************************ 00:16:20.551 START TEST accel_dif_verify 00:16:20.551 ************************************ 00:16:20.551 15:03:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:16:20.551 15:03:36 -- accel/accel.sh@16 -- # local accel_opc 00:16:20.551 15:03:36 -- accel/accel.sh@17 -- # local accel_module 00:16:20.551 15:03:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:20.551 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.551 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.551 15:03:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:20.551 15:03:36 -- accel/accel.sh@12 -- # build_accel_config 00:16:20.551 15:03:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:20.551 15:03:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:20.551 15:03:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:20.551 15:03:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:20.551 15:03:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:20.551 15:03:36 -- accel/accel.sh@40 -- # local IFS=, 00:16:20.551 15:03:36 -- accel/accel.sh@41 -- # jq -r . 00:16:20.551 [2024-04-18 15:03:36.073273] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
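Note: because every case ends with the same real/user/sys triplet followed by an END TEST banner, a per-test wall-clock summary can be pulled out of a saved copy of this console log. A small sketch, assuming the log has been saved as build.log and that the 'real 0mX.XXXs' line always precedes its END TEST line, as it does throughout this section:

  # Print "<wall time>  <test name>" for each completed test in the log.
  awk '
      /(^| )real [0-9]+m[0-9.]+s/ { t = $NF }        # remember the most recent "real" time
      /END TEST/ { for (i = 1; i < NF; i++)
                       if ($i == "END" && $(i + 1) == "TEST") { print t "  " $(i + 2); break } }
  ' build.log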
00:16:20.551 [2024-04-18 15:03:36.073326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63812 ] 00:16:20.551 [2024-04-18 15:03:36.213768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.808 [2024-04-18 15:03:36.298041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.808 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.808 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.808 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.808 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=0x1 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=dif_verify 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=software 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@22 -- # accel_module=software 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 
-- # val=32 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=32 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=1 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val=No 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:20.809 15:03:36 -- accel/accel.sh@20 -- # val= 00:16:20.809 15:03:36 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # IFS=: 00:16:20.809 15:03:36 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.183 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:22.183 15:03:37 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:22.183 15:03:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.183 00:16:22.183 real 0m1.437s 00:16:22.183 user 0m1.260s 00:16:22.183 sys 0m0.090s 00:16:22.183 15:03:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.183 ************************************ 00:16:22.183 END TEST accel_dif_verify 00:16:22.183 15:03:37 -- common/autotest_common.sh@10 -- # set 
+x 00:16:22.183 ************************************ 00:16:22.183 15:03:37 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:22.183 15:03:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:22.183 15:03:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.183 15:03:37 -- common/autotest_common.sh@10 -- # set +x 00:16:22.183 ************************************ 00:16:22.183 START TEST accel_dif_generate 00:16:22.183 ************************************ 00:16:22.183 15:03:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:16:22.183 15:03:37 -- accel/accel.sh@16 -- # local accel_opc 00:16:22.183 15:03:37 -- accel/accel.sh@17 -- # local accel_module 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.183 15:03:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:22.183 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.183 15:03:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:22.183 15:03:37 -- accel/accel.sh@12 -- # build_accel_config 00:16:22.183 15:03:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.183 15:03:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.183 15:03:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.183 15:03:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.183 15:03:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.183 15:03:37 -- accel/accel.sh@40 -- # local IFS=, 00:16:22.183 15:03:37 -- accel/accel.sh@41 -- # jq -r . 00:16:22.183 [2024-04-18 15:03:37.678859] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:22.183 [2024-04-18 15:03:37.678959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63851 ] 00:16:22.183 [2024-04-18 15:03:37.820472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.443 [2024-04-18 15:03:37.903938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.443 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.443 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.443 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.443 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.443 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.443 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.443 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.443 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.443 15:03:37 -- accel/accel.sh@20 -- # val=0x1 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=dif_generate 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=software 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@22 -- # accel_module=software 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=32 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=32 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=1 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val=No 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:22.444 15:03:37 -- accel/accel.sh@20 -- # val= 00:16:22.444 15:03:37 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # IFS=: 00:16:22.444 15:03:37 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var 
val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:23.822 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:23.822 15:03:39 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:23.822 15:03:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:23.822 00:16:23.822 real 0m1.454s 00:16:23.822 user 0m1.264s 00:16:23.822 sys 0m0.103s 00:16:23.822 15:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.822 15:03:39 -- common/autotest_common.sh@10 -- # set +x 00:16:23.822 ************************************ 00:16:23.822 END TEST accel_dif_generate 00:16:23.822 ************************************ 00:16:23.822 15:03:39 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:23.822 15:03:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:23.822 15:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.822 15:03:39 -- common/autotest_common.sh@10 -- # set +x 00:16:23.822 ************************************ 00:16:23.822 START TEST accel_dif_generate_copy 00:16:23.822 ************************************ 00:16:23.822 15:03:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:16:23.822 15:03:39 -- accel/accel.sh@16 -- # local accel_opc 00:16:23.822 15:03:39 -- accel/accel.sh@17 -- # local accel_module 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:23.822 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:23.822 15:03:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:23.822 15:03:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:23.822 15:03:39 -- accel/accel.sh@12 -- # build_accel_config 00:16:23.822 15:03:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:23.822 15:03:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:23.822 15:03:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:23.822 15:03:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:23.822 15:03:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:23.822 15:03:39 -- accel/accel.sh@40 -- # local IFS=, 00:16:23.822 15:03:39 -- accel/accel.sh@41 -- # jq -r . 00:16:23.822 [2024-04-18 15:03:39.289153] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
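Each finished case above leaves a timing block (real/user/sys) next to its END TEST banner, so per-opcode runtimes can be pulled straight out of a saved console log. A small sketch, assuming the console output was captured to a file named build.log (the file name is an assumption):

  # show the test banners together with their wall-clock times
  grep -E 'END TEST accel|real[[:space:]]+[0-9]+m' build.log

Because the xtrace output interleaves with the banners, the real line sometimes prints just before and sometimes just after its END TEST banner, so pair them by eye rather than by script.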
00:16:23.822 [2024-04-18 15:03:39.289230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63889 ] 00:16:23.822 [2024-04-18 15:03:39.429380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.822 [2024-04-18 15:03:39.508849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=0x1 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=software 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@22 -- # accel_module=software 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=32 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=32 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 
-- # val=1 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val=No 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:24.081 15:03:39 -- accel/accel.sh@20 -- # val= 00:16:24.081 15:03:39 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # IFS=: 00:16:24.081 15:03:39 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 15:03:40 -- accel/accel.sh@20 -- # val= 00:16:25.016 15:03:40 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.016 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.016 ************************************ 00:16:25.016 END TEST accel_dif_generate_copy 00:16:25.016 ************************************ 00:16:25.016 15:03:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:25.016 15:03:40 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:25.016 15:03:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.016 00:16:25.016 real 0m1.450s 00:16:25.016 user 0m1.260s 00:16:25.016 sys 0m0.102s 00:16:25.016 15:03:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.016 15:03:40 -- common/autotest_common.sh@10 -- # set +x 00:16:25.275 15:03:40 -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:25.275 15:03:40 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:25.275 15:03:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:25.275 15:03:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.275 15:03:40 -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.275 ************************************ 00:16:25.275 START TEST accel_comp 00:16:25.275 ************************************ 00:16:25.275 15:03:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:25.275 15:03:40 -- accel/accel.sh@16 -- # local accel_opc 00:16:25.275 15:03:40 -- accel/accel.sh@17 -- # local accel_module 00:16:25.275 15:03:40 -- accel/accel.sh@19 -- # IFS=: 00:16:25.275 15:03:40 -- accel/accel.sh@19 -- # read -r var val 00:16:25.275 15:03:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:25.275 15:03:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:25.275 15:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:16:25.275 15:03:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:25.275 15:03:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:25.275 15:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:25.275 15:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:25.275 15:03:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:25.275 15:03:40 -- accel/accel.sh@40 -- # local IFS=, 00:16:25.275 15:03:40 -- accel/accel.sh@41 -- # jq -r . 00:16:25.275 [2024-04-18 15:03:40.897221] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:25.275 [2024-04-18 15:03:40.897289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63929 ] 00:16:25.534 [2024-04-18 15:03:41.039124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.534 [2024-04-18 15:03:41.122087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=0x1 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=compress 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@23 
-- # accel_opc=compress 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=software 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@22 -- # accel_module=software 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=32 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=32 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=1 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val=No 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:25.534 15:03:41 -- accel/accel.sh@20 -- # val= 00:16:25.534 15:03:41 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # IFS=: 00:16:25.534 15:03:41 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # 
read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:26.938 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:26.938 15:03:42 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:26.938 15:03:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:26.938 00:16:26.938 real 0m1.459s 00:16:26.938 user 0m1.267s 00:16:26.938 sys 0m0.106s 00:16:26.938 15:03:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:26.938 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:26.938 ************************************ 00:16:26.938 END TEST accel_comp 00:16:26.938 ************************************ 00:16:26.938 15:03:42 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:26.938 15:03:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:26.938 15:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.938 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:16:26.938 ************************************ 00:16:26.938 START TEST accel_decomp 00:16:26.938 ************************************ 00:16:26.938 15:03:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:26.938 15:03:42 -- accel/accel.sh@16 -- # local accel_opc 00:16:26.938 15:03:42 -- accel/accel.sh@17 -- # local accel_module 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:26.938 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:26.938 15:03:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:26.938 15:03:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:26.938 15:03:42 -- accel/accel.sh@12 -- # build_accel_config 00:16:26.938 15:03:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:26.938 15:03:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:26.938 15:03:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:26.938 15:03:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:26.938 15:03:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:26.938 15:03:42 -- accel/accel.sh@40 -- # local IFS=, 00:16:26.938 15:03:42 -- accel/accel.sh@41 -- # jq -r . 00:16:26.938 [2024-04-18 15:03:42.509471] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
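The compress case that just finished and the decompress case starting here both point accel_perf at the same input file, test/accel/bib from the repo (-l). A hand-run sketch of the pair, with paths exactly as in the log and the -y flag copied verbatim from the harness invocation (its semantics are not asserted here):

  # accel_comp / accel_decomp equivalents run outside the harness (sketch)
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  ACCEL=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL" -t 1 -w compress   -l "$BIB"
  "$ACCEL" -t 1 -w decompress -l "$BIB" -y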
00:16:26.938 [2024-04-18 15:03:42.509562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63967 ] 00:16:27.197 [2024-04-18 15:03:42.650272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.197 [2024-04-18 15:03:42.736521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val=0x1 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val=decompress 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.197 15:03:42 -- accel/accel.sh@20 -- # val=software 00:16:27.197 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.197 15:03:42 -- accel/accel.sh@22 -- # accel_module=software 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.197 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val=32 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- 
accel/accel.sh@20 -- # val=32 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val=1 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val=Yes 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:27.198 15:03:42 -- accel/accel.sh@20 -- # val= 00:16:27.198 15:03:42 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # IFS=: 00:16:27.198 15:03:42 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@20 -- # val= 00:16:28.575 15:03:43 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:43 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:28.575 15:03:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:28.575 15:03:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.575 00:16:28.575 real 0m1.465s 00:16:28.575 user 0m1.268s 00:16:28.575 sys 0m0.109s 00:16:28.575 15:03:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.575 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.575 ************************************ 00:16:28.575 END TEST accel_decomp 00:16:28.575 ************************************ 00:16:28.575 15:03:43 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
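The accel_decmop_full case queued above reuses the decompress workload but appends -o 0 to the accel_perf arguments. Reproducing it outside the harness, with every flag copied verbatim from the run_test line (the meaning of -o 0 is not asserted here; the trace below shows this variant operating on 111250-byte buffers instead of 4096-byte ones):

  # 'full' decompress variant; flags taken as-is from the harness command line (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0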
00:16:28.575 15:03:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:28.575 15:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.575 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:16:28.575 ************************************ 00:16:28.575 START TEST accel_decmop_full 00:16:28.575 ************************************ 00:16:28.575 15:03:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:28.575 15:03:44 -- accel/accel.sh@16 -- # local accel_opc 00:16:28.575 15:03:44 -- accel/accel.sh@17 -- # local accel_module 00:16:28.575 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.575 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.575 15:03:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:28.575 15:03:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:28.575 15:03:44 -- accel/accel.sh@12 -- # build_accel_config 00:16:28.575 15:03:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:28.575 15:03:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:28.575 15:03:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:28.575 15:03:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:28.575 15:03:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:28.575 15:03:44 -- accel/accel.sh@40 -- # local IFS=, 00:16:28.575 15:03:44 -- accel/accel.sh@41 -- # jq -r . 00:16:28.575 [2024-04-18 15:03:44.135771] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:28.575 [2024-04-18 15:03:44.136413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64009 ] 00:16:28.575 [2024-04-18 15:03:44.277843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.835 [2024-04-18 15:03:44.367650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=0x1 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 
15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=decompress 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=software 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@22 -- # accel_module=software 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=32 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=32 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=1 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val=Yes 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:28.835 15:03:44 -- accel/accel.sh@20 -- # val= 00:16:28.835 15:03:44 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # IFS=: 00:16:28.835 15:03:44 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r 
var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@20 -- # val= 00:16:30.213 15:03:45 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:30.213 15:03:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:30.213 15:03:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.213 00:16:30.213 real 0m1.487s 00:16:30.213 user 0m1.293s 00:16:30.213 sys 0m0.103s 00:16:30.213 15:03:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:30.213 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:16:30.213 ************************************ 00:16:30.213 END TEST accel_decmop_full 00:16:30.213 ************************************ 00:16:30.213 15:03:45 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:30.213 15:03:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:30.213 15:03:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:30.213 15:03:45 -- common/autotest_common.sh@10 -- # set +x 00:16:30.213 ************************************ 00:16:30.213 START TEST accel_decomp_mcore 00:16:30.213 ************************************ 00:16:30.213 15:03:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:30.213 15:03:45 -- accel/accel.sh@16 -- # local accel_opc 00:16:30.213 15:03:45 -- accel/accel.sh@17 -- # local accel_module 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # IFS=: 00:16:30.213 15:03:45 -- accel/accel.sh@19 -- # read -r var val 00:16:30.213 15:03:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:30.213 15:03:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:30.213 15:03:45 -- accel/accel.sh@12 -- # build_accel_config 00:16:30.213 15:03:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:30.213 15:03:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:30.213 15:03:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:30.213 15:03:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:30.213 15:03:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:30.213 15:03:45 -- accel/accel.sh@40 -- # local IFS=, 00:16:30.213 15:03:45 -- accel/accel.sh@41 -- # jq -r . 00:16:30.213 [2024-04-18 15:03:45.766088] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
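accel_decomp_mcore repeats the decompress workload with a -m 0xf core mask added, so accel_perf (and the DPDK EAL underneath it) comes up on four cores instead of one; the four 'Reactor started on core N' notices just below reflect that. A stand-alone sketch with the flags taken from the run_test line above:

  # multicore decompress run on a 0xf core mask (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf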
00:16:30.213 [2024-04-18 15:03:45.766184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64046 ] 00:16:30.213 [2024-04-18 15:03:45.910976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.472 [2024-04-18 15:03:46.010034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.472 [2024-04-18 15:03:46.010120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.472 [2024-04-18 15:03:46.010240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.472 [2024-04-18 15:03:46.010240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=0xf 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=decompress 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=software 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@22 -- # accel_module=software 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 
00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=32 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=32 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=1 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val=Yes 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:30.472 15:03:46 -- accel/accel.sh@20 -- # val= 00:16:30.472 15:03:46 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # IFS=: 00:16:30.472 15:03:46 -- accel/accel.sh@19 -- # read -r var val 00:16:31.851 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.851 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.851 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.851 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.851 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.851 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.851 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.851 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.851 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- 
accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:31.852 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:31.852 15:03:47 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:31.852 15:03:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:31.852 00:16:31.852 real 0m1.514s 00:16:31.852 user 0m4.633s 00:16:31.852 sys 0m0.126s 00:16:31.852 15:03:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.852 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.852 ************************************ 00:16:31.852 END TEST accel_decomp_mcore 00:16:31.852 ************************************ 00:16:31.852 15:03:47 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:31.852 15:03:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:31.852 15:03:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.852 15:03:47 -- common/autotest_common.sh@10 -- # set +x 00:16:31.852 ************************************ 00:16:31.852 START TEST accel_decomp_full_mcore 00:16:31.852 ************************************ 00:16:31.852 15:03:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:31.852 15:03:47 -- accel/accel.sh@16 -- # local accel_opc 00:16:31.852 15:03:47 -- accel/accel.sh@17 -- # local accel_module 00:16:31.852 15:03:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:31.852 15:03:47 -- accel/accel.sh@12 -- # build_accel_config 00:16:31.852 15:03:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:31.852 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:31.852 15:03:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:31.852 15:03:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:31.852 15:03:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:31.852 15:03:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:31.852 15:03:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:31.852 15:03:47 -- accel/accel.sh@40 -- # local IFS=, 00:16:31.852 15:03:47 -- accel/accel.sh@41 -- # jq -r . 00:16:31.852 [2024-04-18 15:03:47.425436] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:31.852 [2024-04-18 15:03:47.425680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64093 ] 00:16:32.110 [2024-04-18 15:03:47.570073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.110 [2024-04-18 15:03:47.678749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.110 [2024-04-18 15:03:47.678880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.110 [2024-04-18 15:03:47.678971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.110 [2024-04-18 15:03:47.678975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=0xf 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=decompress 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=software 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@22 -- # accel_module=software 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 
00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=32 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=32 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=1 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val=Yes 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.110 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.110 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.110 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:32.111 15:03:47 -- accel/accel.sh@20 -- # val= 00:16:32.111 15:03:47 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.111 15:03:47 -- accel/accel.sh@19 -- # IFS=: 00:16:32.111 15:03:47 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- 
accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:48 -- accel/accel.sh@20 -- # val= 00:16:33.488 15:03:48 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:48 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 ************************************ 00:16:33.488 END TEST accel_decomp_full_mcore 00:16:33.488 ************************************ 00:16:33.488 15:03:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:33.488 15:03:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:33.488 15:03:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.488 00:16:33.488 real 0m1.515s 00:16:33.488 user 0m0.011s 00:16:33.488 sys 0m0.005s 00:16:33.488 15:03:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.488 15:03:48 -- common/autotest_common.sh@10 -- # set +x 00:16:33.488 15:03:48 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:33.488 15:03:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:33.488 15:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.488 15:03:48 -- common/autotest_common.sh@10 -- # set +x 00:16:33.488 ************************************ 00:16:33.488 START TEST accel_decomp_mthread 00:16:33.488 ************************************ 00:16:33.488 15:03:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:33.488 15:03:49 -- accel/accel.sh@16 -- # local accel_opc 00:16:33.488 15:03:49 -- accel/accel.sh@17 -- # local accel_module 00:16:33.488 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.488 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.488 15:03:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:33.488 15:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:16:33.488 15:03:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:33.488 15:03:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:33.488 15:03:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:33.488 15:03:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:33.488 15:03:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:33.488 15:03:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:33.488 15:03:49 -- accel/accel.sh@40 -- # local IFS=, 00:16:33.488 15:03:49 -- accel/accel.sh@41 -- # jq -r . 00:16:33.488 [2024-04-18 15:03:49.094575] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:33.488 [2024-04-18 15:03:49.094654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64135 ] 00:16:33.746 [2024-04-18 15:03:49.237045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.746 [2024-04-18 15:03:49.324992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val=0x1 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val=decompress 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val=software 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@22 -- # accel_module=software 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- accel/accel.sh@20 -- # val=32 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.746 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.746 15:03:49 -- 
accel/accel.sh@20 -- # val=32 00:16:33.746 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.747 15:03:49 -- accel/accel.sh@20 -- # val=2 00:16:33.747 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.747 15:03:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:33.747 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.747 15:03:49 -- accel/accel.sh@20 -- # val=Yes 00:16:33.747 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.747 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.747 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:33.747 15:03:49 -- accel/accel.sh@20 -- # val= 00:16:33.747 15:03:49 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # IFS=: 00:16:33.747 15:03:49 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@20 -- # val= 00:16:35.156 15:03:50 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:35.156 15:03:50 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:35.156 15:03:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:35.156 00:16:35.156 real 0m1.483s 00:16:35.156 user 0m1.282s 00:16:35.156 sys 0m0.114s 00:16:35.156 ************************************ 00:16:35.156 END TEST accel_decomp_mthread 00:16:35.156 ************************************ 00:16:35.156 15:03:50 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:16:35.156 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:35.156 15:03:50 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:35.156 15:03:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:35.156 15:03:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.156 15:03:50 -- common/autotest_common.sh@10 -- # set +x 00:16:35.156 ************************************ 00:16:35.156 START TEST accel_deomp_full_mthread 00:16:35.156 ************************************ 00:16:35.156 15:03:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:35.156 15:03:50 -- accel/accel.sh@16 -- # local accel_opc 00:16:35.156 15:03:50 -- accel/accel.sh@17 -- # local accel_module 00:16:35.156 15:03:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # IFS=: 00:16:35.156 15:03:50 -- accel/accel.sh@19 -- # read -r var val 00:16:35.156 15:03:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:35.156 15:03:50 -- accel/accel.sh@12 -- # build_accel_config 00:16:35.156 15:03:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:35.156 15:03:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:35.156 15:03:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:35.156 15:03:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:35.156 15:03:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:35.156 15:03:50 -- accel/accel.sh@40 -- # local IFS=, 00:16:35.156 15:03:50 -- accel/accel.sh@41 -- # jq -r . 00:16:35.156 [2024-04-18 15:03:50.728647] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:16:35.156 [2024-04-18 15:03:50.728748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64173 ] 00:16:35.417 [2024-04-18 15:03:50.871683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.417 [2024-04-18 15:03:50.970408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=0x1 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=decompress 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=software 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@22 -- # accel_module=software 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=32 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- 
accel/accel.sh@20 -- # val=32 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=2 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val=Yes 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:35.417 15:03:51 -- accel/accel.sh@20 -- # val= 00:16:35.417 15:03:51 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # IFS=: 00:16:35.417 15:03:51 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@20 -- # val= 00:16:36.793 15:03:52 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # IFS=: 00:16:36.793 15:03:52 -- accel/accel.sh@19 -- # read -r var val 00:16:36.793 15:03:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:36.793 15:03:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:36.793 15:03:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:36.793 00:16:36.793 real 0m1.527s 00:16:36.793 user 0m1.325s 00:16:36.793 sys 0m0.110s 00:16:36.793 ************************************ 00:16:36.793 END TEST accel_deomp_full_mthread 00:16:36.793 ************************************ 00:16:36.793 15:03:52 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:16:36.793 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.793 15:03:52 -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:36.793 15:03:52 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:36.793 15:03:52 -- accel/accel.sh@137 -- # build_accel_config 00:16:36.793 15:03:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:36.793 15:03:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:36.793 15:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:36.793 15:03:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:36.793 15:03:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.793 15:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:36.793 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:16:36.793 15:03:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:36.793 15:03:52 -- accel/accel.sh@40 -- # local IFS=, 00:16:36.793 15:03:52 -- accel/accel.sh@41 -- # jq -r . 00:16:36.793 ************************************ 00:16:36.793 START TEST accel_dif_functional_tests 00:16:36.793 ************************************ 00:16:36.793 15:03:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:36.793 [2024-04-18 15:03:52.441653] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:36.793 [2024-04-18 15:03:52.441738] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64213 ] 00:16:37.053 [2024-04-18 15:03:52.583060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.053 [2024-04-18 15:03:52.676673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.053 [2024-04-18 15:03:52.676786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.053 [2024-04-18 15:03:52.676786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.053 00:16:37.053 00:16:37.053 CUnit - A unit testing framework for C - Version 2.1-3 00:16:37.053 http://cunit.sourceforge.net/ 00:16:37.053 00:16:37.053 00:16:37.053 Suite: accel_dif 00:16:37.053 Test: verify: DIF generated, GUARD check ...passed 00:16:37.053 Test: verify: DIF generated, APPTAG check ...passed 00:16:37.053 Test: verify: DIF generated, REFTAG check ...passed 00:16:37.053 Test: verify: DIF not generated, GUARD check ...passed 00:16:37.053 Test: verify: DIF not generated, APPTAG check ...passed 00:16:37.053 Test: verify: DIF not generated, REFTAG check ...passed 00:16:37.053 Test: verify: APPTAG correct, APPTAG check ...[2024-04-18 15:03:52.750117] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:37.053 [2024-04-18 15:03:52.750228] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:37.053 [2024-04-18 15:03:52.750265] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:37.053 [2024-04-18 15:03:52.750286] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:37.053 [2024-04-18 15:03:52.750307] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:37.053 [2024-04-18 15:03:52.750360] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:16:37.053 passed 00:16:37.053 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:16:37.053 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:16:37.053 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:16:37.053 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:16:37.053 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 15:03:52.750516] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:37.053 [2024-04-18 15:03:52.750674] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:37.053 passed 00:16:37.053 Test: generate copy: DIF generated, GUARD check ...passed 00:16:37.053 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:37.053 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:37.053 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:37.053 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:37.053 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:37.053 Test: generate copy: iovecs-len validate ...passed 00:16:37.053 Test: generate copy: buffer alignment validate ...passed 00:16:37.053 00:16:37.053 Run Summary: Type Total Ran Passed Failed Inactive 00:16:37.053 suites 1 1 n/a 0 0 00:16:37.053 tests 20 20 20 0 0 00:16:37.053 asserts 204 204 204 0 n/a 00:16:37.053 00:16:37.053 Elapsed time = 0.002 seconds 00:16:37.053 [2024-04-18 15:03:52.751039] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:16:37.312 00:16:37.312 real 0m0.564s 00:16:37.312 user 0m0.657s 00:16:37.312 sys 0m0.132s 00:16:37.312 15:03:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.312 ************************************ 00:16:37.312 END TEST accel_dif_functional_tests 00:16:37.312 15:03:52 -- common/autotest_common.sh@10 -- # set +x 00:16:37.312 ************************************ 00:16:37.312 00:16:37.312 real 0m36.621s 00:16:37.312 user 0m36.647s 00:16:37.312 sys 0m5.051s 00:16:37.312 15:03:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.312 15:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.312 ************************************ 00:16:37.312 END TEST accel 00:16:37.312 ************************************ 00:16:37.570 15:03:53 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:37.571 15:03:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:37.571 15:03:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.571 15:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.571 ************************************ 00:16:37.571 START TEST accel_rpc 00:16:37.571 ************************************ 00:16:37.571 15:03:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:37.829 * Looking for test storage... 
00:16:37.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:37.829 15:03:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:37.829 15:03:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64288 00:16:37.829 15:03:53 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:37.829 15:03:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 64288 00:16:37.829 15:03:53 -- common/autotest_common.sh@817 -- # '[' -z 64288 ']' 00:16:37.829 15:03:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.829 15:03:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.829 15:03:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.829 15:03:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.829 15:03:53 -- common/autotest_common.sh@10 -- # set +x 00:16:37.829 [2024-04-18 15:03:53.377758] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:37.829 [2024-04-18 15:03:53.378087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64288 ] 00:16:37.829 [2024-04-18 15:03:53.515758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.086 [2024-04-18 15:03:53.605620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.653 15:03:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.653 15:03:54 -- common/autotest_common.sh@850 -- # return 0 00:16:38.653 15:03:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:38.653 15:03:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:38.653 15:03:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:38.653 15:03:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:38.653 15:03:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:38.653 15:03:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:38.653 15:03:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.653 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.912 ************************************ 00:16:38.912 START TEST accel_assign_opcode 00:16:38.912 ************************************ 00:16:38.912 15:03:54 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:16:38.912 15:03:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:38.912 15:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.912 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.912 [2024-04-18 15:03:54.380941] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:38.912 15:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.912 15:03:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:38.912 15:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.912 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.912 [2024-04-18 15:03:54.392906] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:38.913 15:03:54 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.913 15:03:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:38.913 15:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.913 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.913 15:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:38.913 15:03:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:38.913 15:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.913 15:03:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:38.913 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:38.913 15:03:54 -- accel/accel_rpc.sh@42 -- # grep software 00:16:38.913 15:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.173 software 00:16:39.173 ************************************ 00:16:39.173 END TEST accel_assign_opcode 00:16:39.173 ************************************ 00:16:39.173 00:16:39.173 real 0m0.257s 00:16:39.173 user 0m0.048s 00:16:39.173 sys 0m0.018s 00:16:39.173 15:03:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.173 15:03:54 -- common/autotest_common.sh@10 -- # set +x 00:16:39.173 15:03:54 -- accel/accel_rpc.sh@55 -- # killprocess 64288 00:16:39.173 15:03:54 -- common/autotest_common.sh@936 -- # '[' -z 64288 ']' 00:16:39.173 15:03:54 -- common/autotest_common.sh@940 -- # kill -0 64288 00:16:39.173 15:03:54 -- common/autotest_common.sh@941 -- # uname 00:16:39.173 15:03:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.173 15:03:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64288 00:16:39.173 killing process with pid 64288 00:16:39.173 15:03:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.173 15:03:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.173 15:03:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64288' 00:16:39.173 15:03:54 -- common/autotest_common.sh@955 -- # kill 64288 00:16:39.173 15:03:54 -- common/autotest_common.sh@960 -- # wait 64288 00:16:39.432 00:16:39.432 real 0m1.898s 00:16:39.432 user 0m1.938s 00:16:39.432 sys 0m0.528s 00:16:39.432 15:03:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.432 ************************************ 00:16:39.432 END TEST accel_rpc 00:16:39.432 ************************************ 00:16:39.432 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.432 15:03:55 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:39.432 15:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:39.432 15:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.432 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.690 ************************************ 00:16:39.690 START TEST app_cmdline 00:16:39.690 ************************************ 00:16:39.690 15:03:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:39.690 * Looking for test storage... 
00:16:39.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:39.690 15:03:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:39.690 15:03:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=64408 00:16:39.691 15:03:55 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:39.691 15:03:55 -- app/cmdline.sh@18 -- # waitforlisten 64408 00:16:39.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.691 15:03:55 -- common/autotest_common.sh@817 -- # '[' -z 64408 ']' 00:16:39.691 15:03:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.691 15:03:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.691 15:03:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.691 15:03:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.691 15:03:55 -- common/autotest_common.sh@10 -- # set +x 00:16:39.949 [2024-04-18 15:03:55.414741] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:39.949 [2024-04-18 15:03:55.414833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64408 ] 00:16:39.949 [2024-04-18 15:03:55.556909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.949 [2024-04-18 15:03:55.638082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.885 15:03:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.885 15:03:56 -- common/autotest_common.sh@850 -- # return 0 00:16:40.885 15:03:56 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:40.885 { 00:16:40.885 "fields": { 00:16:40.885 "commit": "ce34c7fd8", 00:16:40.885 "major": 24, 00:16:40.885 "minor": 5, 00:16:40.885 "patch": 0, 00:16:40.885 "suffix": "-pre" 00:16:40.885 }, 00:16:40.885 "version": "SPDK v24.05-pre git sha1 ce34c7fd8" 00:16:40.885 } 00:16:40.885 15:03:56 -- app/cmdline.sh@22 -- # expected_methods=() 00:16:40.885 15:03:56 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:40.885 15:03:56 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:16:40.885 15:03:56 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:40.885 15:03:56 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:40.885 15:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.885 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:16:40.885 15:03:56 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:40.885 15:03:56 -- app/cmdline.sh@26 -- # sort 00:16:40.885 15:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.885 15:03:56 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:40.885 15:03:56 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:40.885 15:03:56 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:40.885 15:03:56 -- common/autotest_common.sh@638 -- # local es=0 00:16:40.885 15:03:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:40.885 15:03:56 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.885 15:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:40.885 15:03:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.885 15:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:40.885 15:03:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.885 15:03:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:40.885 15:03:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.885 15:03:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:40.885 15:03:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:41.144 2024/04/18 15:03:56 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:16:41.144 request: 00:16:41.144 { 00:16:41.144 "method": "env_dpdk_get_mem_stats", 00:16:41.144 "params": {} 00:16:41.144 } 00:16:41.144 Got JSON-RPC error response 00:16:41.144 GoRPCClient: error on JSON-RPC call 00:16:41.144 15:03:56 -- common/autotest_common.sh@641 -- # es=1 00:16:41.144 15:03:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:41.144 15:03:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:41.144 15:03:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:41.144 15:03:56 -- app/cmdline.sh@1 -- # killprocess 64408 00:16:41.144 15:03:56 -- common/autotest_common.sh@936 -- # '[' -z 64408 ']' 00:16:41.144 15:03:56 -- common/autotest_common.sh@940 -- # kill -0 64408 00:16:41.144 15:03:56 -- common/autotest_common.sh@941 -- # uname 00:16:41.144 15:03:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.144 15:03:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64408 00:16:41.144 15:03:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.144 15:03:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.144 killing process with pid 64408 00:16:41.144 15:03:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64408' 00:16:41.144 15:03:56 -- common/autotest_common.sh@955 -- # kill 64408 00:16:41.144 15:03:56 -- common/autotest_common.sh@960 -- # wait 64408 00:16:41.402 00:16:41.402 real 0m1.863s 00:16:41.402 user 0m2.100s 00:16:41.402 sys 0m0.538s 00:16:41.402 15:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.402 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.402 ************************************ 00:16:41.402 END TEST app_cmdline 00:16:41.402 ************************************ 00:16:41.746 15:03:57 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:41.746 15:03:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:41.746 15:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.746 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.746 ************************************ 00:16:41.746 START TEST version 00:16:41.746 ************************************ 00:16:41.746 15:03:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:41.746 * Looking for test storage... 
00:16:41.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:41.746 15:03:57 -- app/version.sh@17 -- # get_header_version major 00:16:41.746 15:03:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:41.746 15:03:57 -- app/version.sh@14 -- # cut -f2 00:16:41.746 15:03:57 -- app/version.sh@14 -- # tr -d '"' 00:16:41.746 15:03:57 -- app/version.sh@17 -- # major=24 00:16:41.746 15:03:57 -- app/version.sh@18 -- # get_header_version minor 00:16:41.746 15:03:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:41.746 15:03:57 -- app/version.sh@14 -- # cut -f2 00:16:41.746 15:03:57 -- app/version.sh@14 -- # tr -d '"' 00:16:41.746 15:03:57 -- app/version.sh@18 -- # minor=5 00:16:41.746 15:03:57 -- app/version.sh@19 -- # get_header_version patch 00:16:41.746 15:03:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:41.746 15:03:57 -- app/version.sh@14 -- # tr -d '"' 00:16:41.746 15:03:57 -- app/version.sh@14 -- # cut -f2 00:16:41.746 15:03:57 -- app/version.sh@19 -- # patch=0 00:16:41.746 15:03:57 -- app/version.sh@20 -- # get_header_version suffix 00:16:41.746 15:03:57 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:41.746 15:03:57 -- app/version.sh@14 -- # cut -f2 00:16:41.746 15:03:57 -- app/version.sh@14 -- # tr -d '"' 00:16:41.746 15:03:57 -- app/version.sh@20 -- # suffix=-pre 00:16:41.746 15:03:57 -- app/version.sh@22 -- # version=24.5 00:16:41.746 15:03:57 -- app/version.sh@25 -- # (( patch != 0 )) 00:16:41.746 15:03:57 -- app/version.sh@28 -- # version=24.5rc0 00:16:41.746 15:03:57 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:41.746 15:03:57 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:41.746 15:03:57 -- app/version.sh@30 -- # py_version=24.5rc0 00:16:41.746 15:03:57 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:16:41.746 00:16:41.746 real 0m0.187s 00:16:41.746 user 0m0.105s 00:16:41.746 sys 0m0.118s 00:16:41.746 15:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.746 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:41.746 ************************************ 00:16:41.746 END TEST version 00:16:41.746 ************************************ 00:16:41.746 15:03:57 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:16:41.746 15:03:57 -- spdk/autotest.sh@194 -- # uname -s 00:16:41.746 15:03:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:41.746 15:03:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:42.004 15:03:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:42.004 15:03:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@258 -- # timing_exit lib 00:16:42.004 15:03:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:42.004 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:42.004 15:03:57 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:16:42.004 15:03:57 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:16:42.004 15:03:57 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:16:42.004 15:03:57 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:42.004 15:03:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.004 15:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.004 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:42.004 ************************************ 00:16:42.004 START TEST nvmf_tcp 00:16:42.004 ************************************ 00:16:42.004 15:03:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:42.004 * Looking for test storage... 00:16:42.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:42.004 15:03:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:16:42.004 15:03:57 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:16:42.004 15:03:57 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.004 15:03:57 -- nvmf/common.sh@7 -- # uname -s 00:16:42.004 15:03:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.004 15:03:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.004 15:03:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.004 15:03:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.004 15:03:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.004 15:03:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.004 15:03:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.004 15:03:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.004 15:03:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.004 15:03:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.004 15:03:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:42.004 15:03:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:42.004 15:03:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.004 15:03:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.004 15:03:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.263 15:03:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.263 15:03:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.263 15:03:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.263 15:03:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.263 15:03:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.263 15:03:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.263 15:03:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- paths/export.sh@5 -- # export PATH 00:16:42.264 15:03:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- nvmf/common.sh@47 -- # : 0 00:16:42.264 15:03:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.264 15:03:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.264 15:03:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.264 15:03:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.264 15:03:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.264 15:03:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:42.264 15:03:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:16:42.264 15:03:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:16:42.264 15:03:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:42.264 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 15:03:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:16:42.264 15:03:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:42.264 15:03:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.264 15:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.264 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 ************************************ 00:16:42.264 START TEST nvmf_example 00:16:42.264 ************************************ 00:16:42.264 15:03:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:42.264 * Looking for test storage... 
00:16:42.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.264 15:03:57 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.264 15:03:57 -- nvmf/common.sh@7 -- # uname -s 00:16:42.264 15:03:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.264 15:03:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.264 15:03:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.264 15:03:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.264 15:03:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.264 15:03:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.264 15:03:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.264 15:03:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.264 15:03:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.264 15:03:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.264 15:03:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:42.264 15:03:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:42.264 15:03:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.264 15:03:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.264 15:03:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.264 15:03:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.264 15:03:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.264 15:03:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.264 15:03:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.264 15:03:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.264 15:03:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- paths/export.sh@5 -- # export PATH 00:16:42.264 15:03:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.264 15:03:57 -- nvmf/common.sh@47 -- # : 0 00:16:42.264 15:03:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.264 15:03:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.264 15:03:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.264 15:03:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.264 15:03:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.264 15:03:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.264 15:03:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:42.264 15:03:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:42.264 15:03:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:42.264 15:03:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:42.264 15:03:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:42.264 15:03:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:42.264 15:03:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:42.264 15:03:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:42.264 15:03:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:42.265 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:16:42.523 15:03:57 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:42.523 15:03:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:42.523 15:03:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.523 15:03:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:42.523 15:03:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:42.523 15:03:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:42.523 15:03:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.523 15:03:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.523 15:03:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.523 15:03:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:42.523 15:03:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:42.523 15:03:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:42.523 15:03:57 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:16:42.523 15:03:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:42.523 15:03:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:42.523 15:03:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.523 15:03:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.523 15:03:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.523 15:03:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:42.523 15:03:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.523 15:03:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.523 15:03:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.523 15:03:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.523 15:03:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.523 15:03:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.523 15:03:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.523 15:03:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.523 15:03:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:42.523 Cannot find device "nvmf_init_br" 00:16:42.523 15:03:57 -- nvmf/common.sh@154 -- # true 00:16:42.523 15:03:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:42.523 Cannot find device "nvmf_tgt_br" 00:16:42.523 15:03:58 -- nvmf/common.sh@155 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.523 Cannot find device "nvmf_tgt_br2" 00:16:42.523 15:03:58 -- nvmf/common.sh@156 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:42.523 Cannot find device "nvmf_init_br" 00:16:42.523 15:03:58 -- nvmf/common.sh@157 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:42.523 Cannot find device "nvmf_tgt_br" 00:16:42.523 15:03:58 -- nvmf/common.sh@158 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:42.523 Cannot find device "nvmf_tgt_br2" 00:16:42.523 15:03:58 -- nvmf/common.sh@159 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:42.523 Cannot find device "nvmf_br" 00:16:42.523 15:03:58 -- nvmf/common.sh@160 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:42.523 Cannot find device "nvmf_init_if" 00:16:42.523 15:03:58 -- nvmf/common.sh@161 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.523 15:03:58 -- nvmf/common.sh@162 -- # true 00:16:42.523 15:03:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.524 15:03:58 -- nvmf/common.sh@163 -- # true 00:16:42.524 15:03:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.524 15:03:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.524 15:03:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.524 15:03:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.524 15:03:58 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.524 15:03:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.524 15:03:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.781 15:03:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:42.781 15:03:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:42.781 15:03:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:42.781 15:03:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:42.781 15:03:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:42.781 15:03:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:42.781 15:03:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.781 15:03:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.781 15:03:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.781 15:03:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:42.781 15:03:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:42.781 15:03:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.781 15:03:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.781 15:03:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.781 15:03:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.781 15:03:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.781 15:03:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:42.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:16:42.781 00:16:42.781 --- 10.0.0.2 ping statistics --- 00:16:42.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.782 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:42.782 15:03:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:42.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:16:42.782 00:16:42.782 --- 10.0.0.3 ping statistics --- 00:16:42.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.782 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:42.782 15:03:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:42.782 00:16:42.782 --- 10.0.0.1 ping statistics --- 00:16:42.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.782 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:42.782 15:03:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.782 15:03:58 -- nvmf/common.sh@422 -- # return 0 00:16:42.782 15:03:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:42.782 15:03:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.782 15:03:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:42.782 15:03:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:42.782 15:03:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.782 15:03:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:42.782 15:03:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:42.782 15:03:58 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:42.782 15:03:58 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:42.782 15:03:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:42.782 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:16:42.782 15:03:58 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:42.782 15:03:58 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:42.782 15:03:58 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:42.782 15:03:58 -- target/nvmf_example.sh@34 -- # nvmfpid=64784 00:16:42.782 15:03:58 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.782 15:03:58 -- target/nvmf_example.sh@36 -- # waitforlisten 64784 00:16:42.782 15:03:58 -- common/autotest_common.sh@817 -- # '[' -z 64784 ']' 00:16:42.782 15:03:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.782 15:03:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.782 15:03:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
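The nvmf_veth_init sequence just above builds the entire test network out of virtual devices: the namespace nvmf_tgt_ns_spdk holds the target side, three veth pairs provide one initiator port (nvmf_init_if, 10.0.0.1/24, kept in the root namespace) and two target ports (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3, moved into the namespace), and the peer ends are enslaved to the bridge nvmf_br so all three addresses can reach each other. The three pings confirm connectivity before the example target is started. Condensed from the commands visible in the trace (the initial "Cannot find device" cleanup pre-checks are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator port
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The `ip netns exec nvmf_tgt_ns_spdk` prefix is what NVMF_TARGET_NS_CMD expands to when the example target is launched below, while the spdk_nvme_perf initiator stays in the root namespace and reaches 10.0.0.2:4420 across the bridge.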
00:16:42.782 15:03:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.782 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:16:43.719 15:03:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:43.719 15:03:59 -- common/autotest_common.sh@850 -- # return 0 00:16:43.719 15:03:59 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:43.719 15:03:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:43.719 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.719 15:03:59 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.719 15:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.719 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.719 15:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.719 15:03:59 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:43.719 15:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.719 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.978 15:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.978 15:03:59 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:43.978 15:03:59 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:43.979 15:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.979 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 15:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.979 15:03:59 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:43.979 15:03:59 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:43.979 15:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.979 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 15:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.979 15:03:59 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.979 15:03:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:43.979 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 15:03:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:43.979 15:03:59 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:43.979 15:03:59 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:56.181 Initializing NVMe Controllers 00:16:56.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:56.181 Initialization complete. Launching workers. 
00:16:56.181 ======================================================== 00:16:56.181 Latency(us) 00:16:56.181 Device Information : IOPS MiB/s Average min max 00:16:56.181 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18256.43 71.31 3505.48 596.57 24029.37 00:16:56.181 ======================================================== 00:16:56.181 Total : 18256.43 71.31 3505.48 596.57 24029.37 00:16:56.181 00:16:56.181 15:04:09 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:16:56.181 15:04:09 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:16:56.181 15:04:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:56.181 15:04:09 -- nvmf/common.sh@117 -- # sync 00:16:56.181 15:04:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.181 15:04:09 -- nvmf/common.sh@120 -- # set +e 00:16:56.181 15:04:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.181 15:04:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.181 rmmod nvme_tcp 00:16:56.181 rmmod nvme_fabrics 00:16:56.181 rmmod nvme_keyring 00:16:56.181 15:04:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.181 15:04:09 -- nvmf/common.sh@124 -- # set -e 00:16:56.181 15:04:09 -- nvmf/common.sh@125 -- # return 0 00:16:56.181 15:04:09 -- nvmf/common.sh@478 -- # '[' -n 64784 ']' 00:16:56.181 15:04:09 -- nvmf/common.sh@479 -- # killprocess 64784 00:16:56.181 15:04:09 -- common/autotest_common.sh@936 -- # '[' -z 64784 ']' 00:16:56.181 15:04:09 -- common/autotest_common.sh@940 -- # kill -0 64784 00:16:56.181 15:04:09 -- common/autotest_common.sh@941 -- # uname 00:16:56.181 15:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.181 15:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64784 00:16:56.181 15:04:09 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:16:56.181 15:04:09 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:16:56.181 killing process with pid 64784 00:16:56.182 15:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64784' 00:16:56.182 15:04:09 -- common/autotest_common.sh@955 -- # kill 64784 00:16:56.182 15:04:09 -- common/autotest_common.sh@960 -- # wait 64784 00:16:56.182 nvmf threads initialize successfully 00:16:56.182 bdev subsystem init successfully 00:16:56.182 created a nvmf target service 00:16:56.182 create targets's poll groups done 00:16:56.182 all subsystems of target started 00:16:56.182 nvmf target is running 00:16:56.182 all subsystems of target stopped 00:16:56.182 destroy targets's poll groups done 00:16:56.182 destroyed the nvmf target service 00:16:56.182 bdev subsystem finish successfully 00:16:56.182 nvmf threads destroy successfully 00:16:56.182 15:04:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:56.182 15:04:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:56.182 15:04:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:56.182 15:04:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.182 15:04:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.182 15:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.182 15:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.182 15:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.182 15:04:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:56.182 15:04:10 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:56.182 15:04:10 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:16:56.182 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:56.182 00:16:56.182 real 0m12.396s 00:16:56.182 user 0m43.543s 00:16:56.182 sys 0m2.389s 00:16:56.182 15:04:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.182 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:56.182 ************************************ 00:16:56.182 END TEST nvmf_example 00:16:56.182 ************************************ 00:16:56.182 15:04:10 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:56.182 15:04:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.182 15:04:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.182 15:04:10 -- common/autotest_common.sh@10 -- # set +x 00:16:56.182 ************************************ 00:16:56.182 START TEST nvmf_filesystem 00:16:56.182 ************************************ 00:16:56.182 15:04:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:56.182 * Looking for test storage... 00:16:56.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.182 15:04:10 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:56.182 15:04:10 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:56.182 15:04:10 -- common/autotest_common.sh@34 -- # set -e 00:16:56.182 15:04:10 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:56.182 15:04:10 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:56.182 15:04:10 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:56.182 15:04:10 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:56.182 15:04:10 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:56.182 15:04:10 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:56.182 15:04:10 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:56.182 15:04:10 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:56.182 15:04:10 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:16:56.182 15:04:10 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:56.182 15:04:10 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:56.182 15:04:10 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:56.182 15:04:10 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:56.182 15:04:10 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:56.182 15:04:10 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:56.182 15:04:10 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:56.182 15:04:10 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:56.182 15:04:10 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:56.182 15:04:10 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:56.182 15:04:10 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:56.182 15:04:10 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:56.182 15:04:10 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:56.182 15:04:10 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:56.182 15:04:10 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:56.182 15:04:10 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:56.182 15:04:10 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:56.182 15:04:10 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:56.182 15:04:10 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:56.182 15:04:10 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:56.182 15:04:10 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:56.182 15:04:10 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:56.182 15:04:10 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:56.182 15:04:10 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:56.182 15:04:10 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:56.182 15:04:10 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:56.182 15:04:10 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:56.182 15:04:10 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:56.182 15:04:10 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:56.182 15:04:10 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:16:56.182 15:04:10 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:56.182 15:04:10 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:56.182 15:04:10 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:56.182 15:04:10 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:56.182 15:04:10 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:56.182 15:04:10 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:56.182 15:04:10 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:16:56.182 15:04:10 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:16:56.182 15:04:10 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:56.182 15:04:10 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:16:56.182 15:04:10 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:16:56.182 15:04:10 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:16:56.182 15:04:10 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:16:56.182 15:04:10 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:16:56.182 15:04:10 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:16:56.182 15:04:10 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:16:56.182 15:04:10 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:16:56.182 15:04:10 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:16:56.182 15:04:10 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:16:56.182 15:04:10 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:16:56.182 15:04:10 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:16:56.182 15:04:10 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:16:56.182 15:04:10 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:16:56.182 15:04:10 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:16:56.182 15:04:10 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:16:56.182 15:04:10 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:16:56.182 
15:04:10 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:56.182 15:04:10 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:16:56.182 15:04:10 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:16:56.182 15:04:10 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:16:56.182 15:04:10 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:16:56.182 15:04:10 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:16:56.182 15:04:10 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:16:56.182 15:04:10 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:16:56.182 15:04:10 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:16:56.182 15:04:10 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:16:56.182 15:04:10 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:16:56.182 15:04:10 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:16:56.182 15:04:10 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:56.182 15:04:10 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:16:56.182 15:04:10 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:16:56.182 15:04:10 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:56.182 15:04:10 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:56.182 15:04:10 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:56.182 15:04:10 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:56.182 15:04:10 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:56.182 15:04:10 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:56.182 15:04:10 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:56.182 15:04:10 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:56.182 15:04:10 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:56.182 15:04:10 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:56.182 15:04:10 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:56.182 15:04:10 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:56.182 15:04:10 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:56.182 15:04:10 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:56.182 15:04:10 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:56.182 15:04:10 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:56.182 #define SPDK_CONFIG_H 00:16:56.182 #define SPDK_CONFIG_APPS 1 00:16:56.182 #define SPDK_CONFIG_ARCH native 00:16:56.182 #undef SPDK_CONFIG_ASAN 00:16:56.182 #define SPDK_CONFIG_AVAHI 1 00:16:56.182 #undef SPDK_CONFIG_CET 00:16:56.182 #define SPDK_CONFIG_COVERAGE 1 00:16:56.183 #define SPDK_CONFIG_CROSS_PREFIX 00:16:56.183 #undef SPDK_CONFIG_CRYPTO 00:16:56.183 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:56.183 #undef SPDK_CONFIG_CUSTOMOCF 00:16:56.183 #undef SPDK_CONFIG_DAOS 00:16:56.183 #define SPDK_CONFIG_DAOS_DIR 00:16:56.183 #define SPDK_CONFIG_DEBUG 1 00:16:56.183 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:56.183 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:56.183 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:56.183 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:56.183 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:56.183 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:56.183 #define SPDK_CONFIG_EXAMPLES 1 00:16:56.183 #undef SPDK_CONFIG_FC 00:16:56.183 #define SPDK_CONFIG_FC_PATH 00:16:56.183 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:56.183 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:56.183 #undef SPDK_CONFIG_FUSE 00:16:56.183 #undef SPDK_CONFIG_FUZZER 00:16:56.183 #define SPDK_CONFIG_FUZZER_LIB 00:16:56.183 #define SPDK_CONFIG_GOLANG 1 00:16:56.183 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:56.183 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:56.183 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:56.183 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:16:56.183 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:56.183 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:56.183 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:56.183 #define SPDK_CONFIG_IDXD 1 00:16:56.183 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:56.183 #undef SPDK_CONFIG_IPSEC_MB 00:16:56.183 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:56.183 #define SPDK_CONFIG_ISAL 1 00:16:56.183 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:56.183 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:56.183 #define SPDK_CONFIG_LIBDIR 00:16:56.183 #undef SPDK_CONFIG_LTO 00:16:56.183 #define SPDK_CONFIG_MAX_LCORES 00:16:56.183 #define SPDK_CONFIG_NVME_CUSE 1 00:16:56.183 #undef SPDK_CONFIG_OCF 00:16:56.183 #define SPDK_CONFIG_OCF_PATH 00:16:56.183 #define SPDK_CONFIG_OPENSSL_PATH 00:16:56.183 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:56.183 #define SPDK_CONFIG_PGO_DIR 00:16:56.183 #undef SPDK_CONFIG_PGO_USE 00:16:56.183 #define SPDK_CONFIG_PREFIX /usr/local 00:16:56.183 #undef SPDK_CONFIG_RAID5F 00:16:56.183 #undef SPDK_CONFIG_RBD 00:16:56.183 #define SPDK_CONFIG_RDMA 1 00:16:56.183 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:56.183 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:56.183 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:56.183 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:56.183 #define SPDK_CONFIG_SHARED 1 00:16:56.183 #undef SPDK_CONFIG_SMA 00:16:56.183 #define SPDK_CONFIG_TESTS 1 00:16:56.183 #undef SPDK_CONFIG_TSAN 00:16:56.183 #define SPDK_CONFIG_UBLK 1 00:16:56.183 #define SPDK_CONFIG_UBSAN 1 00:16:56.183 #undef SPDK_CONFIG_UNIT_TESTS 00:16:56.183 #undef SPDK_CONFIG_URING 00:16:56.183 #define SPDK_CONFIG_URING_PATH 00:16:56.183 #undef SPDK_CONFIG_URING_ZNS 00:16:56.183 #define SPDK_CONFIG_USDT 1 00:16:56.183 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:56.183 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:56.183 #undef SPDK_CONFIG_VFIO_USER 00:16:56.183 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:56.183 #define SPDK_CONFIG_VHOST 1 00:16:56.183 #define SPDK_CONFIG_VIRTIO 1 00:16:56.183 #undef SPDK_CONFIG_VTUNE 00:16:56.183 #define SPDK_CONFIG_VTUNE_DIR 00:16:56.183 #define SPDK_CONFIG_WERROR 1 00:16:56.183 #define SPDK_CONFIG_WPDK_DIR 00:16:56.183 #undef SPDK_CONFIG_XNVME 00:16:56.183 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:56.183 15:04:10 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:56.183 15:04:10 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.183 15:04:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.183 15:04:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.183 15:04:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.183 15:04:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.183 15:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.183 15:04:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.183 15:04:10 -- paths/export.sh@5 -- # export PATH 00:16:56.183 15:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.183 15:04:10 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:56.183 15:04:10 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:56.183 15:04:10 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:56.183 15:04:10 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:56.183 15:04:10 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:56.183 15:04:10 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:56.183 15:04:10 -- pm/common@67 -- # TEST_TAG=N/A 00:16:56.183 15:04:10 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:56.183 15:04:10 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:56.183 15:04:10 -- pm/common@71 -- # uname -s 00:16:56.183 15:04:10 -- pm/common@71 -- # PM_OS=Linux 00:16:56.183 15:04:10 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:56.183 15:04:10 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:16:56.183 15:04:10 -- pm/common@76 -- # [[ Linux == Linux ]] 00:16:56.183 15:04:10 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:16:56.183 15:04:10 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:16:56.183 15:04:10 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:16:56.183 15:04:10 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:16:56.183 15:04:10 -- common/autotest_common.sh@57 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:16:56.183 15:04:10 -- common/autotest_common.sh@61 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:56.183 15:04:10 -- common/autotest_common.sh@63 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:16:56.183 15:04:10 -- common/autotest_common.sh@65 -- # : 1 00:16:56.183 15:04:10 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:56.183 15:04:10 -- common/autotest_common.sh@67 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:16:56.183 15:04:10 -- common/autotest_common.sh@69 -- # : 00:16:56.183 15:04:10 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:16:56.183 15:04:10 -- common/autotest_common.sh@71 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:16:56.183 15:04:10 -- common/autotest_common.sh@73 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:16:56.183 15:04:10 -- common/autotest_common.sh@75 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:16:56.183 15:04:10 -- common/autotest_common.sh@77 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:56.183 15:04:10 -- common/autotest_common.sh@79 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:16:56.183 15:04:10 -- common/autotest_common.sh@81 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:16:56.183 15:04:10 -- common/autotest_common.sh@83 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:16:56.183 15:04:10 -- common/autotest_common.sh@85 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:16:56.183 15:04:10 -- common/autotest_common.sh@87 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:16:56.183 15:04:10 -- common/autotest_common.sh@89 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:16:56.183 15:04:10 -- common/autotest_common.sh@91 -- # : 1 00:16:56.183 15:04:10 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:16:56.183 15:04:10 -- common/autotest_common.sh@93 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:16:56.183 15:04:10 -- common/autotest_common.sh@95 -- # : 0 00:16:56.183 15:04:10 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:56.184 15:04:10 -- common/autotest_common.sh@97 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:16:56.184 15:04:10 -- common/autotest_common.sh@99 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:16:56.184 15:04:10 -- common/autotest_common.sh@101 -- # : tcp 00:16:56.184 15:04:10 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:56.184 15:04:10 
-- common/autotest_common.sh@103 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:16:56.184 15:04:10 -- common/autotest_common.sh@105 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:16:56.184 15:04:10 -- common/autotest_common.sh@107 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:16:56.184 15:04:10 -- common/autotest_common.sh@109 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:16:56.184 15:04:10 -- common/autotest_common.sh@111 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:16:56.184 15:04:10 -- common/autotest_common.sh@113 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:16:56.184 15:04:10 -- common/autotest_common.sh@115 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:16:56.184 15:04:10 -- common/autotest_common.sh@117 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:56.184 15:04:10 -- common/autotest_common.sh@119 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:16:56.184 15:04:10 -- common/autotest_common.sh@121 -- # : 1 00:16:56.184 15:04:10 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:16:56.184 15:04:10 -- common/autotest_common.sh@123 -- # : 00:16:56.184 15:04:10 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:56.184 15:04:10 -- common/autotest_common.sh@125 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:16:56.184 15:04:10 -- common/autotest_common.sh@127 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:16:56.184 15:04:10 -- common/autotest_common.sh@129 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:16:56.184 15:04:10 -- common/autotest_common.sh@131 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:16:56.184 15:04:10 -- common/autotest_common.sh@133 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:16:56.184 15:04:10 -- common/autotest_common.sh@135 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:16:56.184 15:04:10 -- common/autotest_common.sh@137 -- # : 00:16:56.184 15:04:10 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:16:56.184 15:04:10 -- common/autotest_common.sh@139 -- # : true 00:16:56.184 15:04:10 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:16:56.184 15:04:10 -- common/autotest_common.sh@141 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:16:56.184 15:04:10 -- common/autotest_common.sh@143 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:16:56.184 15:04:10 -- common/autotest_common.sh@145 -- # : 1 00:16:56.184 15:04:10 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:16:56.184 15:04:10 -- common/autotest_common.sh@147 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:16:56.184 15:04:10 -- common/autotest_common.sh@149 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:16:56.184 
15:04:10 -- common/autotest_common.sh@151 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:16:56.184 15:04:10 -- common/autotest_common.sh@153 -- # : 00:16:56.184 15:04:10 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:16:56.184 15:04:10 -- common/autotest_common.sh@155 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:16:56.184 15:04:10 -- common/autotest_common.sh@157 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:16:56.184 15:04:10 -- common/autotest_common.sh@159 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:16:56.184 15:04:10 -- common/autotest_common.sh@161 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:16:56.184 15:04:10 -- common/autotest_common.sh@163 -- # : 0 00:16:56.184 15:04:10 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:16:56.184 15:04:10 -- common/autotest_common.sh@166 -- # : 00:16:56.184 15:04:10 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:16:56.184 15:04:10 -- common/autotest_common.sh@168 -- # : 1 00:16:56.184 15:04:10 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:16:56.184 15:04:10 -- common/autotest_common.sh@170 -- # : 1 00:16:56.184 15:04:10 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:56.184 15:04:10 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:56.184 15:04:10 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
00:16:56.184 15:04:10 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:56.184 15:04:10 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:56.184 15:04:10 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:56.184 15:04:10 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:16:56.184 15:04:10 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:56.184 15:04:10 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:56.184 15:04:10 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:56.184 15:04:10 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:56.184 15:04:10 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:56.184 15:04:10 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:16:56.184 15:04:10 -- common/autotest_common.sh@199 -- # cat 00:16:56.184 15:04:10 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:16:56.184 15:04:10 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:56.184 15:04:10 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:56.184 15:04:10 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:56.184 15:04:10 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:56.184 15:04:10 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:16:56.184 15:04:10 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:16:56.184 15:04:10 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:56.184 15:04:10 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:56.184 15:04:10 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:56.184 15:04:10 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:56.184 15:04:10 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:56.184 15:04:10 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:56.184 15:04:10 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:56.184 15:04:10 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:56.184 15:04:10 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:56.184 15:04:10 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:56.184 15:04:10 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:16:56.184 15:04:10 -- common/autotest_common.sh@252 -- # export valgrind= 00:16:56.184 15:04:10 -- common/autotest_common.sh@252 -- # valgrind= 00:16:56.184 15:04:10 -- common/autotest_common.sh@258 -- # uname -s 00:16:56.184 15:04:10 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:16:56.184 15:04:10 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:16:56.184 15:04:10 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:16:56.184 15:04:10 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:16:56.184 15:04:10 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:16:56.184 15:04:10 -- common/autotest_common.sh@268 -- # MAKE=make 00:16:56.185 15:04:10 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:16:56.185 15:04:10 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:16:56.185 15:04:10 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:16:56.185 15:04:10 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:16:56.185 15:04:10 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:16:56.185 15:04:10 -- common/autotest_common.sh@289 -- # for i in "$@" 00:16:56.185 15:04:10 -- common/autotest_common.sh@290 -- # case "$i" in 00:16:56.185 15:04:10 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:16:56.185 15:04:10 -- common/autotest_common.sh@307 -- # [[ -z 65029 ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@307 -- # kill -0 65029 00:16:56.185 15:04:10 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:16:56.185 15:04:10 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:16:56.185 15:04:10 -- common/autotest_common.sh@320 -- # local mount target_dir 00:16:56.185 15:04:10 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:16:56.185 15:04:10 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:16:56.185 15:04:10 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:16:56.185 15:04:10 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:16:56.185 15:04:10 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.cfyf8p 00:16:56.185 15:04:10 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:56.185 15:04:10 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.cfyf8p/tests/target /tmp/spdk.cfyf8p 00:16:56.185 15:04:10 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@316 -- # df -T 00:16:56.185 15:04:10 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266613760 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=13812281344 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=5212049408 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=13812281344 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=5212049408 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:16:56.185 15:04:10 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267760640 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267895808 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:16:56.185 15:04:10 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # avails["$mount"]=93549805568 00:16:56.185 15:04:10 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:16:56.185 15:04:10 -- common/autotest_common.sh@352 -- # uses["$mount"]=6152974336 00:16:56.185 15:04:10 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:16:56.185 15:04:10 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:16:56.185 * Looking for test storage... 
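The set_test_storage logic traced here (the df -T parse above and the candidate walk that follows) reduces to a small piece of shell: load the df output into associative arrays keyed by mount point, then step through the candidate directories until one offers enough free space for the test. A simplified sketch of that logic, with sizes converted to bytes so they match the figures printed above (a reconstruction from the trace, not the verbatim autotest_common.sh source):

    requested_size=2214592512                    # 2 GiB plus a margin, as printed above
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))         # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done

The first candidate, the test directory itself, lives on the /home btrfs volume with roughly 13.8 GB available, which already satisfies the request; the next lines of the trace confirm the run settling on /home/vagrant/spdk_repo/spdk/test/nvmf/target.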
00:16:56.185 15:04:10 -- common/autotest_common.sh@357 -- # local target_space new_size 00:16:56.185 15:04:10 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:16:56.185 15:04:10 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.185 15:04:10 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:56.185 15:04:10 -- common/autotest_common.sh@361 -- # mount=/home 00:16:56.185 15:04:10 -- common/autotest_common.sh@363 -- # target_space=13812281344 00:16:56.185 15:04:10 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:16:56.185 15:04:10 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:16:56.185 15:04:10 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.185 15:04:10 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.185 15:04:10 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.185 15:04:10 -- common/autotest_common.sh@378 -- # return 0 00:16:56.185 15:04:10 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:16:56.185 15:04:10 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:16:56.185 15:04:10 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:56.185 15:04:10 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:56.185 15:04:10 -- common/autotest_common.sh@1673 -- # true 00:16:56.185 15:04:10 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:16:56.185 15:04:10 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:16:56.185 15:04:10 -- common/autotest_common.sh@27 -- # exec 00:16:56.185 15:04:10 -- common/autotest_common.sh@29 -- # exec 00:16:56.185 15:04:10 -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:56.185 15:04:10 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:56.185 15:04:10 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:56.185 15:04:10 -- common/autotest_common.sh@18 -- # set -x 00:16:56.186 15:04:10 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.186 15:04:10 -- nvmf/common.sh@7 -- # uname -s 00:16:56.186 15:04:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.186 15:04:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.186 15:04:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.186 15:04:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.186 15:04:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.186 15:04:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.186 15:04:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.186 15:04:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.186 15:04:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.186 15:04:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:56.186 15:04:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:16:56.186 15:04:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.186 15:04:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.186 15:04:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.186 15:04:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.186 15:04:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.186 15:04:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.186 15:04:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.186 15:04:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.186 15:04:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.186 15:04:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.186 15:04:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.186 15:04:10 -- paths/export.sh@5 -- # export PATH 00:16:56.186 15:04:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.186 15:04:10 -- nvmf/common.sh@47 -- # : 0 00:16:56.186 15:04:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.186 15:04:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.186 15:04:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.186 15:04:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.186 15:04:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.186 15:04:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.186 15:04:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.186 15:04:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.186 15:04:10 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:56.186 15:04:10 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:56.186 15:04:10 -- target/filesystem.sh@15 -- # nvmftestinit 00:16:56.186 15:04:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.186 15:04:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.186 15:04:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.186 15:04:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.186 15:04:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.186 15:04:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.186 15:04:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.186 15:04:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.186 15:04:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:56.186 15:04:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:56.186 15:04:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.186 15:04:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.186 15:04:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.186 15:04:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.186 15:04:10 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.186 15:04:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.186 15:04:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.186 15:04:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.186 15:04:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.186 15:04:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.186 15:04:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.186 15:04:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.186 15:04:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.186 15:04:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.186 Cannot find device "nvmf_tgt_br" 00:16:56.186 15:04:10 -- nvmf/common.sh@155 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.186 Cannot find device "nvmf_tgt_br2" 00:16:56.186 15:04:10 -- nvmf/common.sh@156 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.186 15:04:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:56.186 Cannot find device "nvmf_tgt_br" 00:16:56.186 15:04:10 -- nvmf/common.sh@158 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.186 Cannot find device "nvmf_tgt_br2" 00:16:56.186 15:04:10 -- nvmf/common.sh@159 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.186 15:04:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.186 15:04:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.186 15:04:10 -- nvmf/common.sh@162 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.186 15:04:10 -- nvmf/common.sh@163 -- # true 00:16:56.186 15:04:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.186 15:04:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.186 15:04:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.186 15:04:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.186 15:04:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.186 15:04:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.186 15:04:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.186 15:04:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.186 15:04:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.186 15:04:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.186 15:04:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.186 15:04:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.186 15:04:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.186 15:04:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.186 15:04:11 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.186 15:04:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.187 15:04:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.187 15:04:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.187 15:04:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.187 15:04:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.187 15:04:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.187 15:04:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.187 15:04:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.187 15:04:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:16:56.187 00:16:56.187 --- 10.0.0.2 ping statistics --- 00:16:56.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.187 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:56.187 15:04:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:16:56.187 00:16:56.187 --- 10.0.0.3 ping statistics --- 00:16:56.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.187 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:56.187 15:04:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:16:56.187 00:16:56.187 --- 10.0.0.1 ping statistics --- 00:16:56.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.187 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:56.187 15:04:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.187 15:04:11 -- nvmf/common.sh@422 -- # return 0 00:16:56.187 15:04:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:56.187 15:04:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.187 15:04:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:56.187 15:04:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:56.187 15:04:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.187 15:04:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:56.187 15:04:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:56.187 15:04:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:56.187 15:04:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.187 15:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.187 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:16:56.187 ************************************ 00:16:56.187 START TEST nvmf_filesystem_no_in_capsule 00:16:56.187 ************************************ 00:16:56.187 15:04:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:16:56.187 15:04:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:16:56.187 15:04:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:56.187 15:04:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:56.187 15:04:11 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:16:56.187 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:16:56.187 15:04:11 -- nvmf/common.sh@470 -- # nvmfpid=65201 00:16:56.187 15:04:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.187 15:04:11 -- nvmf/common.sh@471 -- # waitforlisten 65201 00:16:56.187 15:04:11 -- common/autotest_common.sh@817 -- # '[' -z 65201 ']' 00:16:56.187 15:04:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.187 15:04:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:56.187 15:04:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.187 15:04:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:56.187 15:04:11 -- common/autotest_common.sh@10 -- # set +x 00:16:56.187 [2024-04-18 15:04:11.322832] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:16:56.187 [2024-04-18 15:04:11.322890] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.187 [2024-04-18 15:04:11.466525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.187 [2024-04-18 15:04:11.555315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.187 [2024-04-18 15:04:11.555373] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.187 [2024-04-18 15:04:11.555383] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.187 [2024-04-18 15:04:11.555392] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.187 [2024-04-18 15:04:11.555399] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
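Once nvmf_tgt is running inside the nvmf_tgt_ns_spdk namespace, the provisioning traced in the following lines is a short RPC sequence plus a host-side connect. Condensed into plain commands (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; this is a sketch of the traced flow, not the verbatim filesystem.sh source):

    # Target side: TCP transport, a 512 MiB malloc bdev, and a subsystem that exports it.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 4096 in the in-capsule pass later on
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side (root namespace): connect over the veth/bridge path set up above,
    # then wait until the namespace shows up as a local block device.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done   # simplified waitforserial

The resulting /dev/nvme0n1 is then partitioned (parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%) and used as the backing device for the ext4/btrfs/xfs runs below.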
00:16:56.187 [2024-04-18 15:04:11.555620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.187 [2024-04-18 15:04:11.555839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.187 [2024-04-18 15:04:11.556615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.187 [2024-04-18 15:04:11.556616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.753 15:04:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:56.753 15:04:12 -- common/autotest_common.sh@850 -- # return 0 00:16:56.753 15:04:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:56.753 15:04:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 15:04:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.753 15:04:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:56.753 15:04:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:56.753 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 [2024-04-18 15:04:12.289041] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:56.753 15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.753 15:04:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:56.753 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 Malloc1 00:16:56.753 15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.753 15:04:12 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:56.753 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.753 15:04:12 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.753 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.753 15:04:12 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.753 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.753 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:56.753 [2024-04-18 15:04:12.451226] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.753 15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.011 15:04:12 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:57.011 15:04:12 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:16:57.011 15:04:12 -- common/autotest_common.sh@1365 -- # local bdev_info 00:16:57.011 15:04:12 -- common/autotest_common.sh@1366 -- # local bs 00:16:57.011 15:04:12 -- common/autotest_common.sh@1367 -- # local nb 00:16:57.011 15:04:12 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:57.011 15:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.011 15:04:12 -- common/autotest_common.sh@10 -- # set +x 00:16:57.011 
15:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.012 15:04:12 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:16:57.012 { 00:16:57.012 "aliases": [ 00:16:57.012 "765015ef-e7e9-4117-8311-278172f0995d" 00:16:57.012 ], 00:16:57.012 "assigned_rate_limits": { 00:16:57.012 "r_mbytes_per_sec": 0, 00:16:57.012 "rw_ios_per_sec": 0, 00:16:57.012 "rw_mbytes_per_sec": 0, 00:16:57.012 "w_mbytes_per_sec": 0 00:16:57.012 }, 00:16:57.012 "block_size": 512, 00:16:57.012 "claim_type": "exclusive_write", 00:16:57.012 "claimed": true, 00:16:57.012 "driver_specific": {}, 00:16:57.012 "memory_domains": [ 00:16:57.012 { 00:16:57.012 "dma_device_id": "system", 00:16:57.012 "dma_device_type": 1 00:16:57.012 }, 00:16:57.012 { 00:16:57.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.012 "dma_device_type": 2 00:16:57.012 } 00:16:57.012 ], 00:16:57.012 "name": "Malloc1", 00:16:57.012 "num_blocks": 1048576, 00:16:57.012 "product_name": "Malloc disk", 00:16:57.012 "supported_io_types": { 00:16:57.012 "abort": true, 00:16:57.012 "compare": false, 00:16:57.012 "compare_and_write": false, 00:16:57.012 "flush": true, 00:16:57.012 "nvme_admin": false, 00:16:57.012 "nvme_io": false, 00:16:57.012 "read": true, 00:16:57.012 "reset": true, 00:16:57.012 "unmap": true, 00:16:57.012 "write": true, 00:16:57.012 "write_zeroes": true 00:16:57.012 }, 00:16:57.012 "uuid": "765015ef-e7e9-4117-8311-278172f0995d", 00:16:57.012 "zoned": false 00:16:57.012 } 00:16:57.012 ]' 00:16:57.012 15:04:12 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:16:57.012 15:04:12 -- common/autotest_common.sh@1369 -- # bs=512 00:16:57.012 15:04:12 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:16:57.012 15:04:12 -- common/autotest_common.sh@1370 -- # nb=1048576 00:16:57.012 15:04:12 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:16:57.012 15:04:12 -- common/autotest_common.sh@1374 -- # echo 512 00:16:57.012 15:04:12 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:57.012 15:04:12 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.270 15:04:12 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.270 15:04:12 -- common/autotest_common.sh@1184 -- # local i=0 00:16:57.270 15:04:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.270 15:04:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:57.270 15:04:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:59.172 15:04:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:59.172 15:04:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:59.172 15:04:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.172 15:04:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:59.172 15:04:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.172 15:04:14 -- common/autotest_common.sh@1194 -- # return 0 00:16:59.172 15:04:14 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:59.172 15:04:14 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:59.172 15:04:14 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:59.172 15:04:14 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:59.172 15:04:14 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:16:59.172 15:04:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:59.172 15:04:14 -- setup/common.sh@80 -- # echo 536870912 00:16:59.172 15:04:14 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:59.172 15:04:14 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:59.172 15:04:14 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:59.172 15:04:14 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:59.430 15:04:14 -- target/filesystem.sh@69 -- # partprobe 00:16:59.430 15:04:15 -- target/filesystem.sh@70 -- # sleep 1 00:17:00.365 15:04:16 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:00.366 15:04:16 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:00.366 15:04:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:00.366 15:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.366 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:00.623 ************************************ 00:17:00.623 START TEST filesystem_ext4 00:17:00.623 ************************************ 00:17:00.623 15:04:16 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:00.623 15:04:16 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:00.623 15:04:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:00.623 15:04:16 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:00.623 15:04:16 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:00.623 15:04:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:00.623 15:04:16 -- common/autotest_common.sh@914 -- # local i=0 00:17:00.624 15:04:16 -- common/autotest_common.sh@915 -- # local force 00:17:00.624 15:04:16 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:00.624 15:04:16 -- common/autotest_common.sh@918 -- # force=-F 00:17:00.624 15:04:16 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:00.624 mke2fs 1.46.5 (30-Dec-2021) 00:17:00.624 Discarding device blocks: 0/522240 done 00:17:00.624 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:00.624 Filesystem UUID: c4b6e886-7fe3-4d3d-82d9-9ce0c98b1c19 00:17:00.624 Superblock backups stored on blocks: 00:17:00.624 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:00.624 00:17:00.624 Allocating group tables: 0/64 done 00:17:00.624 Writing inode tables: 0/64 done 00:17:00.624 Creating journal (8192 blocks): done 00:17:00.624 Writing superblocks and filesystem accounting information: 0/64 done 00:17:00.624 00:17:00.624 15:04:16 -- common/autotest_common.sh@931 -- # return 0 00:17:00.624 15:04:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:00.883 15:04:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:00.883 15:04:16 -- target/filesystem.sh@25 -- # sync 00:17:00.883 15:04:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:00.883 15:04:16 -- target/filesystem.sh@27 -- # sync 00:17:00.883 15:04:16 -- target/filesystem.sh@29 -- # i=0 00:17:00.883 15:04:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:00.883 15:04:16 -- target/filesystem.sh@37 -- # kill -0 65201 00:17:00.883 15:04:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:00.883 15:04:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:00.883 15:04:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:00.883 15:04:16 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:17:00.883 00:17:00.883 real 0m0.443s 00:17:00.883 user 0m0.035s 00:17:00.883 sys 0m0.067s 00:17:00.883 15:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.883 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:00.883 ************************************ 00:17:00.883 END TEST filesystem_ext4 00:17:00.883 ************************************ 00:17:01.141 15:04:16 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:01.141 15:04:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:01.141 15:04:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.141 15:04:16 -- common/autotest_common.sh@10 -- # set +x 00:17:01.141 ************************************ 00:17:01.141 START TEST filesystem_btrfs 00:17:01.141 ************************************ 00:17:01.141 15:04:16 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:01.141 15:04:16 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:01.141 15:04:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:01.142 15:04:16 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:01.142 15:04:16 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:01.142 15:04:16 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:01.142 15:04:16 -- common/autotest_common.sh@914 -- # local i=0 00:17:01.142 15:04:16 -- common/autotest_common.sh@915 -- # local force 00:17:01.142 15:04:16 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:01.142 15:04:16 -- common/autotest_common.sh@920 -- # force=-f 00:17:01.142 15:04:16 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:01.401 btrfs-progs v6.6.2 00:17:01.401 See https://btrfs.readthedocs.io for more information. 00:17:01.401 00:17:01.401 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:01.401 NOTE: several default settings have changed in version 5.15, please make sure 00:17:01.401 this does not affect your deployments: 00:17:01.401 - DUP for metadata (-m dup) 00:17:01.401 - enabled no-holes (-O no-holes) 00:17:01.401 - enabled free-space-tree (-R free-space-tree) 00:17:01.401 00:17:01.401 Label: (null) 00:17:01.401 UUID: b412ec95-51a5-4a87-9d59-3363dc902e28 00:17:01.401 Node size: 16384 00:17:01.401 Sector size: 4096 00:17:01.401 Filesystem size: 510.00MiB 00:17:01.401 Block group profiles: 00:17:01.401 Data: single 8.00MiB 00:17:01.401 Metadata: DUP 32.00MiB 00:17:01.401 System: DUP 8.00MiB 00:17:01.401 SSD detected: yes 00:17:01.401 Zoned device: no 00:17:01.401 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:01.401 Runtime features: free-space-tree 00:17:01.401 Checksum: crc32c 00:17:01.401 Number of devices: 1 00:17:01.401 Devices: 00:17:01.401 ID SIZE PATH 00:17:01.401 1 510.00MiB /dev/nvme0n1p1 00:17:01.401 00:17:01.401 15:04:16 -- common/autotest_common.sh@931 -- # return 0 00:17:01.401 15:04:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:01.401 15:04:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:01.401 15:04:16 -- target/filesystem.sh@25 -- # sync 00:17:01.401 15:04:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:01.401 15:04:16 -- target/filesystem.sh@27 -- # sync 00:17:01.401 15:04:16 -- target/filesystem.sh@29 -- # i=0 00:17:01.401 15:04:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:01.401 15:04:16 -- target/filesystem.sh@37 -- # kill -0 65201 00:17:01.401 15:04:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:01.401 15:04:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:01.401 15:04:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:01.401 15:04:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:01.401 00:17:01.401 real 0m0.307s 00:17:01.401 user 0m0.032s 00:17:01.401 sys 0m0.083s 00:17:01.401 15:04:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.401 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.401 ************************************ 00:17:01.401 END TEST filesystem_btrfs 00:17:01.401 ************************************ 00:17:01.401 15:04:17 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:01.401 15:04:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:01.401 15:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.401 15:04:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.660 ************************************ 00:17:01.660 START TEST filesystem_xfs 00:17:01.660 ************************************ 00:17:01.660 15:04:17 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:01.660 15:04:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:01.660 15:04:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:01.660 15:04:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:01.660 15:04:17 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:01.660 15:04:17 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:01.660 15:04:17 -- common/autotest_common.sh@914 -- # local i=0 00:17:01.660 15:04:17 -- common/autotest_common.sh@915 -- # local force 00:17:01.660 15:04:17 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:01.660 15:04:17 -- common/autotest_common.sh@920 -- # force=-f 00:17:01.660 15:04:17 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:01.660 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:01.660 = sectsz=512 attr=2, projid32bit=1 00:17:01.660 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:01.660 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:01.660 data = bsize=4096 blocks=130560, imaxpct=25 00:17:01.660 = sunit=0 swidth=0 blks 00:17:01.660 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:01.660 log =internal log bsize=4096 blocks=16384, version=2 00:17:01.660 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:01.660 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:02.228 Discarding blocks...Done. 00:17:02.228 15:04:17 -- common/autotest_common.sh@931 -- # return 0 00:17:02.228 15:04:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:04.782 15:04:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:04.782 15:04:20 -- target/filesystem.sh@25 -- # sync 00:17:04.782 15:04:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:04.782 15:04:20 -- target/filesystem.sh@27 -- # sync 00:17:04.782 15:04:20 -- target/filesystem.sh@29 -- # i=0 00:17:04.782 15:04:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:04.782 15:04:20 -- target/filesystem.sh@37 -- # kill -0 65201 00:17:04.782 15:04:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:04.782 15:04:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:04.782 15:04:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:04.782 15:04:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:04.782 00:17:04.782 real 0m3.111s 00:17:04.782 user 0m0.028s 00:17:04.782 sys 0m0.080s 00:17:04.782 15:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.782 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.782 ************************************ 00:17:04.782 END TEST filesystem_xfs 00:17:04.782 ************************************ 00:17:04.782 15:04:20 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:04.782 15:04:20 -- target/filesystem.sh@93 -- # sync 00:17:04.782 15:04:20 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.782 15:04:20 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.782 15:04:20 -- common/autotest_common.sh@1205 -- # local i=0 00:17:04.782 15:04:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:04.782 15:04:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.782 15:04:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:04.782 15:04:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.782 15:04:20 -- common/autotest_common.sh@1217 -- # return 0 00:17:04.782 15:04:20 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.782 15:04:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.782 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:04.782 15:04:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.782 15:04:20 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:04.782 15:04:20 -- target/filesystem.sh@101 -- # killprocess 65201 00:17:04.782 15:04:20 -- common/autotest_common.sh@936 -- # '[' -z 65201 ']' 00:17:04.782 15:04:20 -- common/autotest_common.sh@940 -- # kill -0 65201 00:17:04.782 15:04:20 -- 
common/autotest_common.sh@941 -- # uname 00:17:04.782 15:04:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:04.782 15:04:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65201 00:17:04.782 15:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:04.782 15:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:04.782 killing process with pid 65201 00:17:04.782 15:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65201' 00:17:04.782 15:04:20 -- common/autotest_common.sh@955 -- # kill 65201 00:17:04.782 15:04:20 -- common/autotest_common.sh@960 -- # wait 65201 00:17:05.351 15:04:20 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:05.351 00:17:05.351 real 0m9.582s 00:17:05.351 user 0m35.926s 00:17:05.351 sys 0m2.342s 00:17:05.351 15:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:05.351 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:05.351 ************************************ 00:17:05.351 END TEST nvmf_filesystem_no_in_capsule 00:17:05.351 ************************************ 00:17:05.351 15:04:20 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:05.351 15:04:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:05.351 15:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.351 15:04:20 -- common/autotest_common.sh@10 -- # set +x 00:17:05.351 ************************************ 00:17:05.351 START TEST nvmf_filesystem_in_capsule 00:17:05.351 ************************************ 00:17:05.351 15:04:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:17:05.351 15:04:21 -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:05.351 15:04:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:05.351 15:04:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:05.351 15:04:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:05.351 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:05.351 15:04:21 -- nvmf/common.sh@470 -- # nvmfpid=65534 00:17:05.351 15:04:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.351 15:04:21 -- nvmf/common.sh@471 -- # waitforlisten 65534 00:17:05.351 15:04:21 -- common/autotest_common.sh@817 -- # '[' -z 65534 ']' 00:17:05.351 15:04:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.351 15:04:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:05.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.351 15:04:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.351 15:04:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:05.351 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:05.610 [2024-04-18 15:04:21.067183] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
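The second pass, nvmf_filesystem_in_capsule, repeats the same flow with one knob changed: the TCP transport is created with a 4096-byte in-capsule data size, so small host writes can be carried inside the command capsule rather than being transferred in a separate data exchange. The only material difference between the two runs is the transport RPC:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass: no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass: 4 KiB in-capsule data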
00:17:05.610 [2024-04-18 15:04:21.067246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.610 [2024-04-18 15:04:21.210374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.610 [2024-04-18 15:04:21.285578] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.610 [2024-04-18 15:04:21.285645] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.610 [2024-04-18 15:04:21.285655] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.610 [2024-04-18 15:04:21.285664] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.610 [2024-04-18 15:04:21.285671] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.610 [2024-04-18 15:04:21.285947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.610 [2024-04-18 15:04:21.286164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.610 [2024-04-18 15:04:21.287010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.610 [2024-04-18 15:04:21.287010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.549 15:04:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:06.549 15:04:21 -- common/autotest_common.sh@850 -- # return 0 00:17:06.549 15:04:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:06.549 15:04:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:06.549 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 15:04:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.549 15:04:21 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:06.549 15:04:21 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:06.549 15:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 [2024-04-18 15:04:21.974963] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.549 15:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:21 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:06.549 15:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:21 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 Malloc1 00:17:06.549 15:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.549 15:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 15:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:06.549 15:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 15:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:22 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.549 15:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 [2024-04-18 15:04:22.144569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.549 15:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:06.549 15:04:22 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:06.549 15:04:22 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:06.549 15:04:22 -- common/autotest_common.sh@1366 -- # local bs 00:17:06.549 15:04:22 -- common/autotest_common.sh@1367 -- # local nb 00:17:06.549 15:04:22 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:06.549 15:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.549 15:04:22 -- common/autotest_common.sh@10 -- # set +x 00:17:06.549 15:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.549 15:04:22 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:06.549 { 00:17:06.549 "aliases": [ 00:17:06.549 "0e14d18c-b45f-46b9-8e54-e379887a78c5" 00:17:06.549 ], 00:17:06.549 "assigned_rate_limits": { 00:17:06.549 "r_mbytes_per_sec": 0, 00:17:06.549 "rw_ios_per_sec": 0, 00:17:06.549 "rw_mbytes_per_sec": 0, 00:17:06.549 "w_mbytes_per_sec": 0 00:17:06.549 }, 00:17:06.549 "block_size": 512, 00:17:06.549 "claim_type": "exclusive_write", 00:17:06.549 "claimed": true, 00:17:06.549 "driver_specific": {}, 00:17:06.549 "memory_domains": [ 00:17:06.549 { 00:17:06.549 "dma_device_id": "system", 00:17:06.549 "dma_device_type": 1 00:17:06.549 }, 00:17:06.549 { 00:17:06.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.549 "dma_device_type": 2 00:17:06.549 } 00:17:06.549 ], 00:17:06.549 "name": "Malloc1", 00:17:06.549 "num_blocks": 1048576, 00:17:06.549 "product_name": "Malloc disk", 00:17:06.549 "supported_io_types": { 00:17:06.549 "abort": true, 00:17:06.549 "compare": false, 00:17:06.549 "compare_and_write": false, 00:17:06.549 "flush": true, 00:17:06.549 "nvme_admin": false, 00:17:06.549 "nvme_io": false, 00:17:06.549 "read": true, 00:17:06.549 "reset": true, 00:17:06.549 "unmap": true, 00:17:06.549 "write": true, 00:17:06.549 "write_zeroes": true 00:17:06.549 }, 00:17:06.549 "uuid": "0e14d18c-b45f-46b9-8e54-e379887a78c5", 00:17:06.549 "zoned": false 00:17:06.549 } 00:17:06.549 ]' 00:17:06.549 15:04:22 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:06.549 15:04:22 -- common/autotest_common.sh@1369 -- # bs=512 00:17:06.549 15:04:22 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:06.808 15:04:22 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:06.808 15:04:22 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:06.808 15:04:22 -- common/autotest_common.sh@1374 -- # echo 512 00:17:06.808 15:04:22 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:06.809 15:04:22 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.809 15:04:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.809 15:04:22 -- common/autotest_common.sh@1184 -- # local i=0 00:17:06.809 15:04:22 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:17:06.809 15:04:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:06.809 15:04:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:09.343 15:04:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:09.343 15:04:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:09.343 15:04:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.343 15:04:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:09.343 15:04:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.343 15:04:24 -- common/autotest_common.sh@1194 -- # return 0 00:17:09.343 15:04:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:09.343 15:04:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:09.343 15:04:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:09.343 15:04:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:09.343 15:04:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:09.343 15:04:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:09.343 15:04:24 -- setup/common.sh@80 -- # echo 536870912 00:17:09.343 15:04:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:09.343 15:04:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:09.343 15:04:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:09.343 15:04:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:09.343 15:04:24 -- target/filesystem.sh@69 -- # partprobe 00:17:09.343 15:04:24 -- target/filesystem.sh@70 -- # sleep 1 00:17:10.281 15:04:25 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:10.281 15:04:25 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:10.281 15:04:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:10.281 15:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.281 15:04:25 -- common/autotest_common.sh@10 -- # set +x 00:17:10.281 ************************************ 00:17:10.281 START TEST filesystem_in_capsule_ext4 00:17:10.281 ************************************ 00:17:10.281 15:04:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:10.281 15:04:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:10.281 15:04:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:10.281 15:04:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:10.281 15:04:25 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:10.281 15:04:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:10.281 15:04:25 -- common/autotest_common.sh@914 -- # local i=0 00:17:10.281 15:04:25 -- common/autotest_common.sh@915 -- # local force 00:17:10.281 15:04:25 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:10.281 15:04:25 -- common/autotest_common.sh@918 -- # force=-F 00:17:10.281 15:04:25 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:10.281 mke2fs 1.46.5 (30-Dec-2021) 00:17:10.281 Discarding device blocks: 0/522240 done 00:17:10.281 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:10.281 Filesystem UUID: 5d9a819b-2980-48d3-8f08-dc149ad06e8d 00:17:10.281 Superblock backups stored on blocks: 00:17:10.281 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:10.281 00:17:10.281 Allocating group tables: 0/64 done 
00:17:10.281 Writing inode tables: 0/64 done 00:17:10.281 Creating journal (8192 blocks): done 00:17:10.281 Writing superblocks and filesystem accounting information: 0/64 done 00:17:10.281 00:17:10.281 15:04:25 -- common/autotest_common.sh@931 -- # return 0 00:17:10.281 15:04:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:10.540 15:04:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:10.540 15:04:26 -- target/filesystem.sh@25 -- # sync 00:17:10.540 15:04:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:10.540 15:04:26 -- target/filesystem.sh@27 -- # sync 00:17:10.540 15:04:26 -- target/filesystem.sh@29 -- # i=0 00:17:10.540 15:04:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:10.540 15:04:26 -- target/filesystem.sh@37 -- # kill -0 65534 00:17:10.540 15:04:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:10.540 15:04:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:10.540 15:04:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:10.540 15:04:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:10.540 ************************************ 00:17:10.540 END TEST filesystem_in_capsule_ext4 00:17:10.540 ************************************ 00:17:10.540 00:17:10.540 real 0m0.425s 00:17:10.540 user 0m0.030s 00:17:10.540 sys 0m0.086s 00:17:10.540 15:04:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.540 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:17:10.800 15:04:26 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:10.800 15:04:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:10.800 15:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.800 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:17:10.800 ************************************ 00:17:10.800 START TEST filesystem_in_capsule_btrfs 00:17:10.800 ************************************ 00:17:10.800 15:04:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:10.800 15:04:26 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:10.800 15:04:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:10.800 15:04:26 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:10.800 15:04:26 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:10.800 15:04:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:10.800 15:04:26 -- common/autotest_common.sh@914 -- # local i=0 00:17:10.800 15:04:26 -- common/autotest_common.sh@915 -- # local force 00:17:10.800 15:04:26 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:10.800 15:04:26 -- common/autotest_common.sh@920 -- # force=-f 00:17:10.800 15:04:26 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:10.800 btrfs-progs v6.6.2 00:17:10.800 See https://btrfs.readthedocs.io for more information. 00:17:10.800 00:17:10.800 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:10.800 NOTE: several default settings have changed in version 5.15, please make sure 00:17:10.800 this does not affect your deployments: 00:17:10.800 - DUP for metadata (-m dup) 00:17:10.800 - enabled no-holes (-O no-holes) 00:17:10.800 - enabled free-space-tree (-R free-space-tree) 00:17:10.800 00:17:10.800 Label: (null) 00:17:10.800 UUID: 0dbb9bb6-791f-4b3d-b2a3-7b2488fbfb52 00:17:10.800 Node size: 16384 00:17:10.800 Sector size: 4096 00:17:10.800 Filesystem size: 510.00MiB 00:17:10.800 Block group profiles: 00:17:10.800 Data: single 8.00MiB 00:17:10.800 Metadata: DUP 32.00MiB 00:17:10.800 System: DUP 8.00MiB 00:17:10.800 SSD detected: yes 00:17:10.800 Zoned device: no 00:17:10.800 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:10.800 Runtime features: free-space-tree 00:17:10.800 Checksum: crc32c 00:17:10.800 Number of devices: 1 00:17:10.800 Devices: 00:17:10.800 ID SIZE PATH 00:17:10.800 1 510.00MiB /dev/nvme0n1p1 00:17:10.800 00:17:10.800 15:04:26 -- common/autotest_common.sh@931 -- # return 0 00:17:10.800 15:04:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:11.059 15:04:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:11.059 15:04:26 -- target/filesystem.sh@25 -- # sync 00:17:11.059 15:04:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:11.059 15:04:26 -- target/filesystem.sh@27 -- # sync 00:17:11.059 15:04:26 -- target/filesystem.sh@29 -- # i=0 00:17:11.059 15:04:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:11.059 15:04:26 -- target/filesystem.sh@37 -- # kill -0 65534 00:17:11.059 15:04:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:11.059 15:04:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:11.059 15:04:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:11.059 15:04:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:11.059 00:17:11.059 real 0m0.239s 00:17:11.059 user 0m0.036s 00:17:11.059 sys 0m0.084s 00:17:11.059 15:04:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.059 ************************************ 00:17:11.059 END TEST filesystem_in_capsule_btrfs 00:17:11.059 ************************************ 00:17:11.059 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 15:04:26 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:11.059 15:04:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:11.059 15:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.059 15:04:26 -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 ************************************ 00:17:11.059 START TEST filesystem_in_capsule_xfs 00:17:11.059 ************************************ 00:17:11.059 15:04:26 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:11.059 15:04:26 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:11.059 15:04:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:11.059 15:04:26 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:11.059 15:04:26 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:11.059 15:04:26 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:11.059 15:04:26 -- common/autotest_common.sh@914 -- # local i=0 00:17:11.059 15:04:26 -- common/autotest_common.sh@915 -- # local force 00:17:11.059 15:04:26 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:11.059 15:04:26 -- common/autotest_common.sh@920 -- # force=-f 
00:17:11.059 15:04:26 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:11.319 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:11.319 = sectsz=512 attr=2, projid32bit=1 00:17:11.319 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:11.319 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:11.319 data = bsize=4096 blocks=130560, imaxpct=25 00:17:11.319 = sunit=0 swidth=0 blks 00:17:11.319 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:11.319 log =internal log bsize=4096 blocks=16384, version=2 00:17:11.319 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:11.319 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:11.885 Discarding blocks...Done. 00:17:11.885 15:04:27 -- common/autotest_common.sh@931 -- # return 0 00:17:11.885 15:04:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:13.849 15:04:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:13.849 15:04:29 -- target/filesystem.sh@25 -- # sync 00:17:13.849 15:04:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:13.849 15:04:29 -- target/filesystem.sh@27 -- # sync 00:17:13.849 15:04:29 -- target/filesystem.sh@29 -- # i=0 00:17:13.849 15:04:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:13.849 15:04:29 -- target/filesystem.sh@37 -- # kill -0 65534 00:17:13.849 15:04:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:13.849 15:04:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:13.849 15:04:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:13.849 15:04:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:13.849 ************************************ 00:17:13.849 END TEST filesystem_in_capsule_xfs 00:17:13.849 ************************************ 00:17:13.849 00:17:13.849 real 0m2.649s 00:17:13.849 user 0m0.032s 00:17:13.849 sys 0m0.085s 00:17:13.849 15:04:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:13.849 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:17:13.850 15:04:29 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:13.850 15:04:29 -- target/filesystem.sh@93 -- # sync 00:17:13.850 15:04:29 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:13.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.850 15:04:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:13.850 15:04:29 -- common/autotest_common.sh@1205 -- # local i=0 00:17:13.850 15:04:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.850 15:04:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:13.850 15:04:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:13.850 15:04:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:13.850 15:04:29 -- common/autotest_common.sh@1217 -- # return 0 00:17:13.850 15:04:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:13.850 15:04:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.850 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:17:13.850 15:04:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.850 15:04:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:13.850 15:04:29 -- target/filesystem.sh@101 -- # killprocess 65534 00:17:13.850 15:04:29 -- common/autotest_common.sh@936 -- # '[' -z 65534 ']' 00:17:13.850 15:04:29 -- common/autotest_common.sh@940 -- # kill -0 65534 
00:17:14.108 15:04:29 -- common/autotest_common.sh@941 -- # uname 00:17:14.108 15:04:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.108 15:04:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65534 00:17:14.108 killing process with pid 65534 00:17:14.108 15:04:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:14.108 15:04:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:14.108 15:04:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65534' 00:17:14.108 15:04:29 -- common/autotest_common.sh@955 -- # kill 65534 00:17:14.108 15:04:29 -- common/autotest_common.sh@960 -- # wait 65534 00:17:14.368 ************************************ 00:17:14.368 END TEST nvmf_filesystem_in_capsule 00:17:14.368 ************************************ 00:17:14.368 15:04:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:14.368 00:17:14.368 real 0m8.962s 00:17:14.368 user 0m33.690s 00:17:14.368 sys 0m2.140s 00:17:14.368 15:04:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.368 15:04:29 -- common/autotest_common.sh@10 -- # set +x 00:17:14.368 15:04:30 -- target/filesystem.sh@108 -- # nvmftestfini 00:17:14.368 15:04:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:14.368 15:04:30 -- nvmf/common.sh@117 -- # sync 00:17:14.368 15:04:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.368 15:04:30 -- nvmf/common.sh@120 -- # set +e 00:17:14.368 15:04:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.627 15:04:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.627 rmmod nvme_tcp 00:17:14.627 rmmod nvme_fabrics 00:17:14.627 rmmod nvme_keyring 00:17:14.627 15:04:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.627 15:04:30 -- nvmf/common.sh@124 -- # set -e 00:17:14.627 15:04:30 -- nvmf/common.sh@125 -- # return 0 00:17:14.627 15:04:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:14.627 15:04:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:14.627 15:04:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:14.627 15:04:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:14.627 15:04:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.627 15:04:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.627 15:04:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.627 15:04:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.627 15:04:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.627 15:04:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:14.627 ************************************ 00:17:14.627 END TEST nvmf_filesystem 00:17:14.627 ************************************ 00:17:14.627 00:17:14.627 real 0m19.825s 00:17:14.627 user 1m10.009s 00:17:14.627 sys 0m5.158s 00:17:14.628 15:04:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.628 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:17:14.628 15:04:30 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:14.628 15:04:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:14.628 15:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.628 15:04:30 -- common/autotest_common.sh@10 -- # set +x 00:17:14.886 ************************************ 00:17:14.886 START TEST nvmf_discovery 00:17:14.886 ************************************ 00:17:14.886 15:04:30 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:14.886 * Looking for test storage... 00:17:14.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:14.886 15:04:30 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:14.886 15:04:30 -- nvmf/common.sh@7 -- # uname -s 00:17:14.886 15:04:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.886 15:04:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.886 15:04:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.886 15:04:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.886 15:04:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.886 15:04:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.886 15:04:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.886 15:04:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.886 15:04:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.886 15:04:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.886 15:04:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:14.886 15:04:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:14.886 15:04:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.886 15:04:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.886 15:04:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:14.886 15:04:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.886 15:04:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:14.886 15:04:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.886 15:04:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.886 15:04:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.886 15:04:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.886 15:04:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.886 15:04:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.886 15:04:30 -- paths/export.sh@5 -- # export PATH 00:17:14.886 15:04:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.886 15:04:30 -- nvmf/common.sh@47 -- # : 0 00:17:14.886 15:04:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.886 15:04:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.886 15:04:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.886 15:04:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.886 15:04:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.886 15:04:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.886 15:04:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.886 15:04:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.887 15:04:30 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:14.887 15:04:30 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:14.887 15:04:30 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:14.887 15:04:30 -- target/discovery.sh@15 -- # hash nvme 00:17:14.887 15:04:30 -- target/discovery.sh@20 -- # nvmftestinit 00:17:14.887 15:04:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:14.887 15:04:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.887 15:04:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:14.887 15:04:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:14.887 15:04:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:14.887 15:04:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.887 15:04:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.887 15:04:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.887 15:04:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:14.887 15:04:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:14.887 15:04:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:14.887 15:04:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:14.887 15:04:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:14.887 15:04:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:14.887 15:04:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.887 15:04:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.887 15:04:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:14.887 15:04:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:14.887 15:04:30 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:14.887 15:04:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:14.887 15:04:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:14.887 15:04:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.887 15:04:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:14.887 15:04:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:14.887 15:04:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:14.887 15:04:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:14.887 15:04:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:14.887 15:04:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:14.887 Cannot find device "nvmf_tgt_br" 00:17:14.887 15:04:30 -- nvmf/common.sh@155 -- # true 00:17:14.887 15:04:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.887 Cannot find device "nvmf_tgt_br2" 00:17:14.887 15:04:30 -- nvmf/common.sh@156 -- # true 00:17:14.887 15:04:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:14.887 15:04:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:15.145 Cannot find device "nvmf_tgt_br" 00:17:15.145 15:04:30 -- nvmf/common.sh@158 -- # true 00:17:15.145 15:04:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:15.145 Cannot find device "nvmf_tgt_br2" 00:17:15.145 15:04:30 -- nvmf/common.sh@159 -- # true 00:17:15.145 15:04:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:15.145 15:04:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:15.145 15:04:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.145 15:04:30 -- nvmf/common.sh@162 -- # true 00:17:15.145 15:04:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.145 15:04:30 -- nvmf/common.sh@163 -- # true 00:17:15.145 15:04:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:15.145 15:04:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:15.145 15:04:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:15.145 15:04:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:15.145 15:04:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:15.145 15:04:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:15.145 15:04:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:15.145 15:04:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:15.145 15:04:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:15.145 15:04:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:15.145 15:04:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:15.403 15:04:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:15.403 15:04:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:15.403 15:04:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.403 15:04:30 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:15.404 15:04:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:15.404 15:04:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:15.404 15:04:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:15.404 15:04:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:15.404 15:04:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:15.404 15:04:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:15.404 15:04:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:15.404 15:04:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:15.404 15:04:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:15.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:15.404 00:17:15.404 --- 10.0.0.2 ping statistics --- 00:17:15.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.404 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:15.404 15:04:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:15.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:15.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:15.404 00:17:15.404 --- 10.0.0.3 ping statistics --- 00:17:15.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.404 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:15.404 15:04:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:15.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:17:15.404 00:17:15.404 --- 10.0.0.1 ping statistics --- 00:17:15.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.404 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:15.404 15:04:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.404 15:04:30 -- nvmf/common.sh@422 -- # return 0 00:17:15.404 15:04:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:15.404 15:04:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.404 15:04:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:15.404 15:04:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:15.404 15:04:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.404 15:04:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:15.404 15:04:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:15.404 15:04:31 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:15.404 15:04:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:15.404 15:04:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:15.404 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.404 15:04:31 -- nvmf/common.sh@470 -- # nvmfpid=66009 00:17:15.404 15:04:31 -- nvmf/common.sh@471 -- # waitforlisten 66009 00:17:15.404 15:04:31 -- common/autotest_common.sh@817 -- # '[' -z 66009 ']' 00:17:15.404 15:04:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.404 15:04:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:15.404 15:04:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.404 15:04:31 -- 
common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.404 15:04:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.404 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:17:15.404 [2024-04-18 15:04:31.073202] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:15.404 [2024-04-18 15:04:31.073273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.662 [2024-04-18 15:04:31.214171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.662 [2024-04-18 15:04:31.304419] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.662 [2024-04-18 15:04:31.304908] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.662 [2024-04-18 15:04:31.304984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.662 [2024-04-18 15:04:31.304995] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.662 [2024-04-18 15:04:31.305003] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.662 [2024-04-18 15:04:31.305143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.662 [2024-04-18 15:04:31.305408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.662 [2024-04-18 15:04:31.305733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.662 [2024-04-18 15:04:31.305734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.597 15:04:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.597 15:04:31 -- common/autotest_common.sh@850 -- # return 0 00:17:16.597 15:04:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:16.597 15:04:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.597 15:04:31 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.597 15:04:32 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 [2024-04-18 15:04:32.046775] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@26 -- # seq 1 4 00:17:16.597 15:04:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:16.597 15:04:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 Null1 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 
15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 [2024-04-18 15:04:32.120569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:16.597 15:04:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 Null2 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:16.597 15:04:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 Null3 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:16.597 15:04:32 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 Null4 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:17:16.597 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.597 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.597 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.597 15:04:32 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 4420 00:17:16.856 00:17:16.856 Discovery Log Number of Records 6, Generation counter 6 00:17:16.856 =====Discovery Log Entry 0====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 00:17:16.856 subtype: current discovery subsystem 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4420 00:17:16.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: explicit discovery connections, duplicate discovery information 00:17:16.856 sectype: none 00:17:16.856 =====Discovery Log Entry 1====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 00:17:16.856 subtype: nvme subsystem 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4420 00:17:16.856 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: none 00:17:16.856 sectype: none 00:17:16.856 =====Discovery Log Entry 2====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 
00:17:16.856 subtype: nvme subsystem 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4420 00:17:16.856 subnqn: nqn.2016-06.io.spdk:cnode2 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: none 00:17:16.856 sectype: none 00:17:16.856 =====Discovery Log Entry 3====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 00:17:16.856 subtype: nvme subsystem 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4420 00:17:16.856 subnqn: nqn.2016-06.io.spdk:cnode3 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: none 00:17:16.856 sectype: none 00:17:16.856 =====Discovery Log Entry 4====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 00:17:16.856 subtype: nvme subsystem 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4420 00:17:16.856 subnqn: nqn.2016-06.io.spdk:cnode4 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: none 00:17:16.856 sectype: none 00:17:16.856 =====Discovery Log Entry 5====== 00:17:16.856 trtype: tcp 00:17:16.856 adrfam: ipv4 00:17:16.856 subtype: discovery subsystem referral 00:17:16.856 treq: not required 00:17:16.856 portid: 0 00:17:16.856 trsvcid: 4430 00:17:16.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:16.856 traddr: 10.0.0.2 00:17:16.856 eflags: none 00:17:16.856 sectype: none 00:17:16.856 15:04:32 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:17:16.856 Perform nvmf subsystem discovery via RPC 00:17:16.856 15:04:32 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:17:16.856 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.856 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.856 [2024-04-18 15:04:32.360109] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:16.856 [ 00:17:16.856 { 00:17:16.856 "allow_any_host": true, 00:17:16.856 "hosts": [], 00:17:16.856 "listen_addresses": [ 00:17:16.856 { 00:17:16.856 "adrfam": "IPv4", 00:17:16.856 "traddr": "10.0.0.2", 00:17:16.856 "transport": "TCP", 00:17:16.856 "trsvcid": "4420", 00:17:16.856 "trtype": "TCP" 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:16.856 "subtype": "Discovery" 00:17:16.856 }, 00:17:16.856 { 00:17:16.856 "allow_any_host": true, 00:17:16.856 "hosts": [], 00:17:16.856 "listen_addresses": [ 00:17:16.856 { 00:17:16.856 "adrfam": "IPv4", 00:17:16.856 "traddr": "10.0.0.2", 00:17:16.856 "transport": "TCP", 00:17:16.856 "trsvcid": "4420", 00:17:16.856 "trtype": "TCP" 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "max_cntlid": 65519, 00:17:16.856 "max_namespaces": 32, 00:17:16.856 "min_cntlid": 1, 00:17:16.856 "model_number": "SPDK bdev Controller", 00:17:16.856 "namespaces": [ 00:17:16.856 { 00:17:16.856 "bdev_name": "Null1", 00:17:16.856 "name": "Null1", 00:17:16.856 "nguid": "91437E67A2E54C2494E946DCDA8D773C", 00:17:16.856 "nsid": 1, 00:17:16.856 "uuid": "91437e67-a2e5-4c24-94e9-46dcda8d773c" 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.856 "serial_number": "SPDK00000000000001", 00:17:16.856 "subtype": "NVMe" 00:17:16.856 }, 00:17:16.856 { 00:17:16.856 "allow_any_host": true, 00:17:16.856 "hosts": [], 00:17:16.856 "listen_addresses": [ 00:17:16.856 { 00:17:16.856 "adrfam": "IPv4", 00:17:16.856 "traddr": "10.0.0.2", 00:17:16.856 "transport": "TCP", 00:17:16.856 "trsvcid": "4420", 00:17:16.856 "trtype": "TCP" 00:17:16.856 
} 00:17:16.856 ], 00:17:16.856 "max_cntlid": 65519, 00:17:16.856 "max_namespaces": 32, 00:17:16.856 "min_cntlid": 1, 00:17:16.856 "model_number": "SPDK bdev Controller", 00:17:16.856 "namespaces": [ 00:17:16.856 { 00:17:16.856 "bdev_name": "Null2", 00:17:16.856 "name": "Null2", 00:17:16.856 "nguid": "FCD5BD5A508142559122D755CA932A73", 00:17:16.856 "nsid": 1, 00:17:16.856 "uuid": "fcd5bd5a-5081-4255-9122-d755ca932a73" 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:16.856 "serial_number": "SPDK00000000000002", 00:17:16.856 "subtype": "NVMe" 00:17:16.856 }, 00:17:16.856 { 00:17:16.856 "allow_any_host": true, 00:17:16.856 "hosts": [], 00:17:16.856 "listen_addresses": [ 00:17:16.856 { 00:17:16.856 "adrfam": "IPv4", 00:17:16.856 "traddr": "10.0.0.2", 00:17:16.856 "transport": "TCP", 00:17:16.856 "trsvcid": "4420", 00:17:16.856 "trtype": "TCP" 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "max_cntlid": 65519, 00:17:16.856 "max_namespaces": 32, 00:17:16.856 "min_cntlid": 1, 00:17:16.856 "model_number": "SPDK bdev Controller", 00:17:16.856 "namespaces": [ 00:17:16.856 { 00:17:16.856 "bdev_name": "Null3", 00:17:16.856 "name": "Null3", 00:17:16.856 "nguid": "58D60E81A3144E10BE83031B9A497F7C", 00:17:16.856 "nsid": 1, 00:17:16.857 "uuid": "58d60e81-a314-4e10-be83-031b9a497f7c" 00:17:16.857 } 00:17:16.857 ], 00:17:16.857 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:17:16.857 "serial_number": "SPDK00000000000003", 00:17:16.857 "subtype": "NVMe" 00:17:16.857 }, 00:17:16.857 { 00:17:16.857 "allow_any_host": true, 00:17:16.857 "hosts": [], 00:17:16.857 "listen_addresses": [ 00:17:16.857 { 00:17:16.857 "adrfam": "IPv4", 00:17:16.857 "traddr": "10.0.0.2", 00:17:16.857 "transport": "TCP", 00:17:16.857 "trsvcid": "4420", 00:17:16.857 "trtype": "TCP" 00:17:16.857 } 00:17:16.857 ], 00:17:16.857 "max_cntlid": 65519, 00:17:16.857 "max_namespaces": 32, 00:17:16.857 "min_cntlid": 1, 00:17:16.857 "model_number": "SPDK bdev Controller", 00:17:16.857 "namespaces": [ 00:17:16.857 { 00:17:16.857 "bdev_name": "Null4", 00:17:16.857 "name": "Null4", 00:17:16.857 "nguid": "C68C692FFC134A2E8ACB899451D968C7", 00:17:16.857 "nsid": 1, 00:17:16.857 "uuid": "c68c692f-fc13-4a2e-8acb-899451d968c7" 00:17:16.857 } 00:17:16.857 ], 00:17:16.857 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:17:16.857 "serial_number": "SPDK00000000000004", 00:17:16.857 "subtype": "NVMe" 00:17:16.857 } 00:17:16.857 ] 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@42 -- # seq 1 4 00:17:16.857 15:04:32 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:16.857 15:04:32 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:16.857 15:04:32 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:16.857 15:04:32 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:16.857 15:04:32 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:17:16.857 15:04:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.857 15:04:32 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:17:16.857 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:16.857 15:04:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.857 15:04:32 -- target/discovery.sh@49 -- # check_bdevs= 00:17:16.857 15:04:32 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:17:16.857 15:04:32 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:17:16.857 15:04:32 -- target/discovery.sh@57 -- # nvmftestfini 00:17:16.857 15:04:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:16.857 15:04:32 -- nvmf/common.sh@117 -- # sync 00:17:17.126 15:04:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:17.126 15:04:32 -- nvmf/common.sh@120 -- # set +e 00:17:17.126 15:04:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:17.126 15:04:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:17.126 rmmod nvme_tcp 00:17:17.126 rmmod nvme_fabrics 00:17:17.126 rmmod nvme_keyring 00:17:17.126 15:04:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:17.126 15:04:32 -- nvmf/common.sh@124 -- # set -e 00:17:17.126 15:04:32 -- nvmf/common.sh@125 -- # return 0 00:17:17.126 15:04:32 -- nvmf/common.sh@478 -- # '[' -n 66009 ']' 00:17:17.126 15:04:32 -- nvmf/common.sh@479 -- # 
killprocess 66009 00:17:17.126 15:04:32 -- common/autotest_common.sh@936 -- # '[' -z 66009 ']' 00:17:17.126 15:04:32 -- common/autotest_common.sh@940 -- # kill -0 66009 00:17:17.126 15:04:32 -- common/autotest_common.sh@941 -- # uname 00:17:17.126 15:04:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.126 15:04:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66009 00:17:17.126 killing process with pid 66009 00:17:17.126 15:04:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:17.126 15:04:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:17.126 15:04:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66009' 00:17:17.126 15:04:32 -- common/autotest_common.sh@955 -- # kill 66009 00:17:17.126 [2024-04-18 15:04:32.690556] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:17.126 15:04:32 -- common/autotest_common.sh@960 -- # wait 66009 00:17:17.384 15:04:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:17.384 15:04:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:17.384 15:04:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:17.384 15:04:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.384 15:04:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:17.384 15:04:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.385 15:04:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.385 15:04:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.385 15:04:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:17.385 00:17:17.385 real 0m2.620s 00:17:17.385 user 0m6.625s 00:17:17.385 sys 0m0.790s 00:17:17.385 15:04:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.385 15:04:32 -- common/autotest_common.sh@10 -- # set +x 00:17:17.385 ************************************ 00:17:17.385 END TEST nvmf_discovery 00:17:17.385 ************************************ 00:17:17.385 15:04:33 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:17.385 15:04:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:17.385 15:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.385 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:17:17.644 ************************************ 00:17:17.644 START TEST nvmf_referrals 00:17:17.644 ************************************ 00:17:17.644 15:04:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:17.644 * Looking for test storage... 
00:17:17.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:17.644 15:04:33 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.644 15:04:33 -- nvmf/common.sh@7 -- # uname -s 00:17:17.644 15:04:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.644 15:04:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.644 15:04:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.644 15:04:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.644 15:04:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.644 15:04:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.644 15:04:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.644 15:04:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.644 15:04:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.644 15:04:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:17.644 15:04:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:17.644 15:04:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.644 15:04:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.644 15:04:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.644 15:04:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.644 15:04:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.644 15:04:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.644 15:04:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.644 15:04:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.644 15:04:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.644 15:04:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.644 15:04:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.644 15:04:33 -- paths/export.sh@5 -- # export PATH 00:17:17.644 15:04:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.644 15:04:33 -- nvmf/common.sh@47 -- # : 0 00:17:17.644 15:04:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.644 15:04:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.644 15:04:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.644 15:04:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.644 15:04:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.644 15:04:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.644 15:04:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.644 15:04:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.644 15:04:33 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:17.644 15:04:33 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:17:17.644 15:04:33 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:17:17.644 15:04:33 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:17.644 15:04:33 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:17.644 15:04:33 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:17.644 15:04:33 -- target/referrals.sh@37 -- # nvmftestinit 00:17:17.644 15:04:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:17.644 15:04:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.644 15:04:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:17.644 15:04:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:17.644 15:04:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:17.644 15:04:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.644 15:04:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.644 15:04:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.644 15:04:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:17.644 15:04:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:17.644 15:04:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.644 15:04:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:17:17.644 15:04:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:17.644 15:04:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:17.644 15:04:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.644 15:04:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.644 15:04:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.644 15:04:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.644 15:04:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.644 15:04:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.644 15:04:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.644 15:04:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.644 15:04:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:17.644 15:04:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:17.644 Cannot find device "nvmf_tgt_br" 00:17:17.644 15:04:33 -- nvmf/common.sh@155 -- # true 00:17:17.644 15:04:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.918 Cannot find device "nvmf_tgt_br2" 00:17:17.918 15:04:33 -- nvmf/common.sh@156 -- # true 00:17:17.918 15:04:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:17.918 15:04:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:17.918 Cannot find device "nvmf_tgt_br" 00:17:17.918 15:04:33 -- nvmf/common.sh@158 -- # true 00:17:17.918 15:04:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:17.918 Cannot find device "nvmf_tgt_br2" 00:17:17.918 15:04:33 -- nvmf/common.sh@159 -- # true 00:17:17.918 15:04:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:17.918 15:04:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:17.918 15:04:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.918 15:04:33 -- nvmf/common.sh@162 -- # true 00:17:17.918 15:04:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.918 15:04:33 -- nvmf/common.sh@163 -- # true 00:17:17.918 15:04:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.918 15:04:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.918 15:04:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.918 15:04:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.918 15:04:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.918 15:04:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.919 15:04:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.919 15:04:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.919 15:04:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:17.919 15:04:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:17.919 15:04:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:17.919 15:04:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:17:17.919 15:04:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:17.919 15:04:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.919 15:04:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.919 15:04:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.919 15:04:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:17.919 15:04:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:17.919 15:04:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.919 15:04:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:18.183 15:04:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.183 15:04:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.183 15:04:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.183 15:04:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:18.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:18.183 00:17:18.183 --- 10.0.0.2 ping statistics --- 00:17:18.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.183 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:18.183 15:04:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:18.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:18.183 00:17:18.183 --- 10.0.0.3 ping statistics --- 00:17:18.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.183 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:18.183 15:04:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:18.183 00:17:18.184 --- 10.0.0.1 ping statistics --- 00:17:18.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.184 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:18.184 15:04:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.184 15:04:33 -- nvmf/common.sh@422 -- # return 0 00:17:18.184 15:04:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:18.184 15:04:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.184 15:04:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:18.184 15:04:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:18.184 15:04:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.184 15:04:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:18.184 15:04:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:18.184 15:04:33 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:18.184 15:04:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:18.184 15:04:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.184 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:17:18.184 15:04:33 -- nvmf/common.sh@470 -- # nvmfpid=66245 00:17:18.184 15:04:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.184 15:04:33 -- nvmf/common.sh@471 -- # waitforlisten 66245 00:17:18.184 15:04:33 -- common/autotest_common.sh@817 -- # '[' -z 66245 ']' 00:17:18.184 15:04:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.184 15:04:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.184 15:04:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.184 15:04:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.184 15:04:33 -- common/autotest_common.sh@10 -- # set +x 00:17:18.184 [2024-04-18 15:04:33.778124] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:18.184 [2024-04-18 15:04:33.778194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.443 [2024-04-18 15:04:33.922889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.443 [2024-04-18 15:04:34.004222] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.443 [2024-04-18 15:04:34.004284] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.443 [2024-04-18 15:04:34.004294] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.443 [2024-04-18 15:04:34.004303] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.443 [2024-04-18 15:04:34.004310] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
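For readers skimming the trace: the nvmf_veth_init block above builds a small throw-away test network before the target is started. Condensed from the commands traced above (interface names and the 10.0.0.x addresses are exactly the ones this run uses), the topology is one initiator-side veth in the default namespace plus two target-side veths moved into the nvmf_tgt_ns_spdk namespace, all joined by a bridge:

  # namespace for the SPDK target, plus three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator gets 10.0.0.1, the target namespace gets 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the *_br peers in the default namespace and open TCP/4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow simply confirm the bridge forwards in both directions before nvmf_tgt is launched inside the namespace.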
00:17:18.443 [2024-04-18 15:04:34.004840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.443 [2024-04-18 15:04:34.004942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.443 [2024-04-18 15:04:34.005214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.443 [2024-04-18 15:04:34.005626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.010 15:04:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.010 15:04:34 -- common/autotest_common.sh@850 -- # return 0 00:17:19.010 15:04:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:19.010 15:04:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:19.010 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.010 15:04:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.010 15:04:34 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.010 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.010 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.010 [2024-04-18 15:04:34.698876] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.268 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.268 15:04:34 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:19.268 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 [2024-04-18 15:04:34.722353] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:19.269 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:19.269 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:19.269 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:19.269 15:04:34 -- target/referrals.sh@48 -- # jq length 00:17:19.269 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:19.269 15:04:34 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:19.269 15:04:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:19.269 15:04:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:19.269 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:17:19.269 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.269 15:04:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:19.269 15:04:34 -- target/referrals.sh@21 -- # sort 00:17:19.269 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:19.269 15:04:34 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:19.269 15:04:34 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:19.269 15:04:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:19.269 15:04:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:19.269 15:04:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:19.269 15:04:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:19.269 15:04:34 -- target/referrals.sh@26 -- # sort 00:17:19.527 15:04:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:19.527 15:04:34 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:19.527 15:04:34 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:19.527 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:34 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:19.527 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:34 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:19.527 15:04:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:19.527 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:35 -- target/referrals.sh@56 -- # jq length 00:17:19.527 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:19.527 15:04:35 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:19.527 15:04:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:19.527 15:04:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # sort 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # echo 00:17:19.527 15:04:35 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:19.527 15:04:35 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:19.527 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:19.527 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:19.527 15:04:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:19.527 15:04:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:19.527 15:04:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:19.527 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.527 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.527 15:04:35 -- target/referrals.sh@21 -- # sort 00:17:19.527 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:19.527 15:04:35 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:19.527 15:04:35 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:19.527 15:04:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:19.527 15:04:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # sort 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:19.527 15:04:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:19.785 15:04:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:19.785 15:04:35 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:19.785 15:04:35 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:19.785 15:04:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:19.785 15:04:35 -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:19.786 15:04:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:19.786 15:04:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:19.786 15:04:35 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:19.786 15:04:35 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:19.786 15:04:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:19.786 15:04:35 -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:19.786 15:04:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 
--hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:19.786 15:04:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:19.786 15:04:35 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:19.786 15:04:35 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:19.786 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.786 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:19.786 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.786 15:04:35 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:19.786 15:04:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:20.044 15:04:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:20.044 15:04:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:20.044 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.044 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:20.044 15:04:35 -- target/referrals.sh@21 -- # sort 00:17:20.044 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.044 15:04:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:20.044 15:04:35 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:20.044 15:04:35 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:20.044 15:04:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:20.044 15:04:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:20.044 15:04:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:20.044 15:04:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:20.044 15:04:35 -- target/referrals.sh@26 -- # sort 00:17:20.044 15:04:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:20.044 15:04:35 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:20.044 15:04:35 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:20.044 15:04:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:20.044 15:04:35 -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:20.044 15:04:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:20.044 15:04:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:20.044 15:04:35 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:20.044 15:04:35 -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:20.044 15:04:35 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:20.045 15:04:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:20.045 15:04:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:20.045 15:04:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
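The referral exercise above keeps comparing two views of the same state: what the target reports over JSON-RPC and what a host sees in its discovery log. A condensed round trip using the same RPC methods and the same nvme-cli invocation that appear in the trace (the rpc.py path, host NQN and host ID are the values from this particular run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # register a referral to another discovery service at 127.0.0.2:4430
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430

  # target-side view: list referrals and count them
  $rpc nvmf_discovery_get_referrals | jq length

  # host-side view: query the discovery controller at 10.0.0.2:8009 and
  # print every record that is not the current discovery subsystem itself
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd \
      --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

  # and take the referral back out again
  $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The -n option seen on some of the add/remove calls above attaches a subsystem NQN to the referral, which is why the later checks look for nqn.2016-06.io.spdk:cnode1 versus the well-known discovery NQN.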
00:17:20.303 15:04:35 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:20.303 15:04:35 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:20.303 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.303 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:20.303 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.303 15:04:35 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:20.303 15:04:35 -- target/referrals.sh@82 -- # jq length 00:17:20.303 15:04:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.303 15:04:35 -- common/autotest_common.sh@10 -- # set +x 00:17:20.303 15:04:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.303 15:04:35 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:20.303 15:04:35 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:20.303 15:04:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:20.303 15:04:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:20.303 15:04:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:20.303 15:04:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:20.303 15:04:35 -- target/referrals.sh@26 -- # sort 00:17:20.303 15:04:35 -- target/referrals.sh@26 -- # echo 00:17:20.303 15:04:35 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:20.303 15:04:35 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:20.303 15:04:35 -- target/referrals.sh@86 -- # nvmftestfini 00:17:20.303 15:04:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:20.303 15:04:35 -- nvmf/common.sh@117 -- # sync 00:17:20.303 15:04:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.303 15:04:35 -- nvmf/common.sh@120 -- # set +e 00:17:20.303 15:04:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.303 15:04:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.303 rmmod nvme_tcp 00:17:20.303 rmmod nvme_fabrics 00:17:20.561 rmmod nvme_keyring 00:17:20.561 15:04:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.561 15:04:36 -- nvmf/common.sh@124 -- # set -e 00:17:20.561 15:04:36 -- nvmf/common.sh@125 -- # return 0 00:17:20.561 15:04:36 -- nvmf/common.sh@478 -- # '[' -n 66245 ']' 00:17:20.561 15:04:36 -- nvmf/common.sh@479 -- # killprocess 66245 00:17:20.561 15:04:36 -- common/autotest_common.sh@936 -- # '[' -z 66245 ']' 00:17:20.561 15:04:36 -- common/autotest_common.sh@940 -- # kill -0 66245 00:17:20.561 15:04:36 -- common/autotest_common.sh@941 -- # uname 00:17:20.561 15:04:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.561 15:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66245 00:17:20.561 killing process with pid 66245 00:17:20.561 15:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:20.561 15:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:20.561 15:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66245' 00:17:20.561 15:04:36 -- common/autotest_common.sh@955 -- # kill 66245 00:17:20.561 15:04:36 -- common/autotest_common.sh@960 -- # wait 66245 00:17:20.820 15:04:36 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.820 15:04:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.820 15:04:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.820 15:04:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.820 15:04:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.820 15:04:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.820 15:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.820 15:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.820 15:04:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:20.820 00:17:20.820 real 0m3.244s 00:17:20.820 user 0m9.822s 00:17:20.820 sys 0m1.110s 00:17:20.820 15:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:20.820 ************************************ 00:17:20.820 END TEST nvmf_referrals 00:17:20.820 ************************************ 00:17:20.820 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:17:20.820 15:04:36 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:20.820 15:04:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:20.820 15:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.820 15:04:36 -- common/autotest_common.sh@10 -- # set +x 00:17:21.079 ************************************ 00:17:21.079 START TEST nvmf_connect_disconnect 00:17:21.079 ************************************ 00:17:21.079 15:04:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:21.079 * Looking for test storage... 00:17:21.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:21.079 15:04:36 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.079 15:04:36 -- nvmf/common.sh@7 -- # uname -s 00:17:21.079 15:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.079 15:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.079 15:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.079 15:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.079 15:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.080 15:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.080 15:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.080 15:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.080 15:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.080 15:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.080 15:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:21.080 15:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:21.080 15:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.080 15:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.080 15:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.080 15:04:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.080 15:04:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.080 15:04:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.080 15:04:36 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.080 15:04:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.080 15:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.080 15:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.080 15:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.080 15:04:36 -- paths/export.sh@5 -- # export PATH 00:17:21.080 15:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.080 15:04:36 -- nvmf/common.sh@47 -- # : 0 00:17:21.080 15:04:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.080 15:04:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.080 15:04:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.080 15:04:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.080 15:04:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.080 15:04:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.080 15:04:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.080 15:04:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.080 15:04:36 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:21.080 15:04:36 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:21.080 15:04:36 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:21.080 15:04:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:21.080 15:04:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.080 15:04:36 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:17:21.080 15:04:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:21.080 15:04:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:21.080 15:04:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.080 15:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.080 15:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.080 15:04:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:21.080 15:04:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:21.080 15:04:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:21.080 15:04:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:21.080 15:04:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:21.080 15:04:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:21.080 15:04:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.080 15:04:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.080 15:04:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.080 15:04:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:21.080 15:04:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.080 15:04:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.080 15:04:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.080 15:04:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.080 15:04:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.080 15:04:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.080 15:04:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.080 15:04:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.080 15:04:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:21.080 15:04:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:21.080 Cannot find device "nvmf_tgt_br" 00:17:21.080 15:04:36 -- nvmf/common.sh@155 -- # true 00:17:21.080 15:04:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.080 Cannot find device "nvmf_tgt_br2" 00:17:21.080 15:04:36 -- nvmf/common.sh@156 -- # true 00:17:21.080 15:04:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:21.080 15:04:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:21.080 Cannot find device "nvmf_tgt_br" 00:17:21.080 15:04:36 -- nvmf/common.sh@158 -- # true 00:17:21.080 15:04:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:21.339 Cannot find device "nvmf_tgt_br2" 00:17:21.339 15:04:36 -- nvmf/common.sh@159 -- # true 00:17:21.339 15:04:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:21.339 15:04:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:21.339 15:04:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.339 15:04:36 -- nvmf/common.sh@162 -- # true 00:17:21.339 15:04:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.339 15:04:36 -- nvmf/common.sh@163 -- # true 00:17:21.339 15:04:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.339 15:04:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:17:21.339 15:04:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.339 15:04:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.339 15:04:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.339 15:04:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.339 15:04:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.339 15:04:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.339 15:04:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:21.339 15:04:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:21.339 15:04:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:21.339 15:04:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:21.339 15:04:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:21.339 15:04:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.659 15:04:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.659 15:04:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.659 15:04:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:21.659 15:04:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:21.659 15:04:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:21.659 15:04:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:21.659 15:04:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:21.659 15:04:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:21.659 15:04:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:21.659 15:04:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:21.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:17:21.659 00:17:21.659 --- 10.0.0.2 ping statistics --- 00:17:21.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.659 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:21.659 15:04:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:21.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:21.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:21.659 00:17:21.659 --- 10.0.0.3 ping statistics --- 00:17:21.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.659 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:21.659 15:04:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:21.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:21.659 00:17:21.659 --- 10.0.0.1 ping statistics --- 00:17:21.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.659 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:21.659 15:04:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.659 15:04:37 -- nvmf/common.sh@422 -- # return 0 00:17:21.659 15:04:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:21.659 15:04:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.659 15:04:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:21.659 15:04:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:21.659 15:04:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.659 15:04:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:21.659 15:04:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:21.659 15:04:37 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:21.659 15:04:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:21.659 15:04:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:21.659 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.659 15:04:37 -- nvmf/common.sh@470 -- # nvmfpid=66559 00:17:21.659 15:04:37 -- nvmf/common.sh@471 -- # waitforlisten 66559 00:17:21.659 15:04:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:21.659 15:04:37 -- common/autotest_common.sh@817 -- # '[' -z 66559 ']' 00:17:21.659 15:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.659 15:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:21.659 15:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.659 15:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:21.659 15:04:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.659 [2024-04-18 15:04:37.233296] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:21.659 [2024-04-18 15:04:37.233377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.917 [2024-04-18 15:04:37.368622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.917 [2024-04-18 15:04:37.450456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.917 [2024-04-18 15:04:37.450512] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.917 [2024-04-18 15:04:37.450522] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.917 [2024-04-18 15:04:37.450531] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.917 [2024-04-18 15:04:37.450564] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
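The section that follows is the connect/disconnect loop itself: a 64 MB malloc bdev (512-byte blocks) is exported as a namespace of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and the host then connects and disconnects five times (num_iterations=5), each cycle producing one of the "disconnected 1 controller(s)" lines below. The script drives this through its own helpers; a rough equivalent with plain nvme-cli, assuming the listener set up below and the host NQN/ID used throughout this run, would be:

  subnqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 5); do
      # attach the initiator to the subsystem over TCP
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$subnqn" \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd \
          --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd
      # tear the association back down; prints "NQN:<nqn> disconnected ..."
      nvme disconnect -n "$subnqn"
  done

The actual test presumably performs additional checks between the two steps (for example that the controller really appeared); those are omitted from this sketch.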
00:17:21.917 [2024-04-18 15:04:37.451054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.917 [2024-04-18 15:04:37.451236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.917 [2024-04-18 15:04:37.451460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.917 [2024-04-18 15:04:37.451461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.485 15:04:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.485 15:04:38 -- common/autotest_common.sh@850 -- # return 0 00:17:22.485 15:04:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:22.485 15:04:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.485 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.485 15:04:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.485 15:04:38 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:22.485 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.485 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.485 [2024-04-18 15:04:38.155807] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.485 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.485 15:04:38 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:22.485 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.485 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.745 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:22.745 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.745 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.745 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.745 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.745 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.745 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.745 15:04:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.745 15:04:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.745 [2024-04-18 15:04:38.229630] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.745 15:04:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:22.745 15:04:38 -- target/connect_disconnect.sh@34 -- # set +x 00:17:25.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.134 15:04:49 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:34.134 15:04:49 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:34.134 15:04:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:34.134 15:04:49 -- nvmf/common.sh@117 -- # sync 00:17:34.134 15:04:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.134 15:04:49 -- nvmf/common.sh@120 -- # set +e 00:17:34.134 15:04:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.134 15:04:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.134 rmmod nvme_tcp 00:17:34.134 rmmod nvme_fabrics 00:17:34.134 rmmod nvme_keyring 00:17:34.134 15:04:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.134 15:04:49 -- nvmf/common.sh@124 -- # set -e 00:17:34.134 15:04:49 -- nvmf/common.sh@125 -- # return 0 00:17:34.134 15:04:49 -- nvmf/common.sh@478 -- # '[' -n 66559 ']' 00:17:34.134 15:04:49 -- nvmf/common.sh@479 -- # killprocess 66559 00:17:34.134 15:04:49 -- common/autotest_common.sh@936 -- # '[' -z 66559 ']' 00:17:34.134 15:04:49 -- common/autotest_common.sh@940 -- # kill -0 66559 00:17:34.134 15:04:49 -- common/autotest_common.sh@941 -- # uname 00:17:34.134 15:04:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.134 15:04:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66559 00:17:34.134 15:04:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.134 15:04:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.135 killing process with pid 66559 00:17:34.135 15:04:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66559' 00:17:34.135 15:04:49 -- common/autotest_common.sh@955 -- # kill 66559 00:17:34.135 15:04:49 -- common/autotest_common.sh@960 -- # wait 66559 00:17:34.392 15:04:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:34.392 15:04:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:34.392 15:04:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:34.392 15:04:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.392 15:04:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.392 15:04:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.392 15:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.392 15:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.392 15:04:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:34.392 ************************************ 00:17:34.392 END TEST nvmf_connect_disconnect 00:17:34.392 ************************************ 00:17:34.392 00:17:34.392 real 0m13.536s 00:17:34.392 user 0m48.452s 00:17:34.392 sys 0m2.743s 00:17:34.392 15:04:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.392 15:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 15:04:50 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:34.651 15:04:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.651 15:04:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.651 15:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:34.651 ************************************ 00:17:34.651 START TEST nvmf_multitarget 00:17:34.651 ************************************ 00:17:34.651 15:04:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:34.910 * Looking for test storage... 
00:17:34.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:34.910 15:04:50 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.910 15:04:50 -- nvmf/common.sh@7 -- # uname -s 00:17:34.910 15:04:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.910 15:04:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.910 15:04:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.910 15:04:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.910 15:04:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.910 15:04:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.910 15:04:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.910 15:04:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.910 15:04:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.910 15:04:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:34.910 15:04:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:34.910 15:04:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.910 15:04:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.910 15:04:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.910 15:04:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.910 15:04:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.910 15:04:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.910 15:04:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.910 15:04:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.910 15:04:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.910 15:04:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.910 15:04:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.910 15:04:50 -- paths/export.sh@5 -- # export PATH 00:17:34.910 15:04:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.910 15:04:50 -- nvmf/common.sh@47 -- # : 0 00:17:34.910 15:04:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.910 15:04:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.910 15:04:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.910 15:04:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.910 15:04:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.910 15:04:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.910 15:04:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.910 15:04:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.910 15:04:50 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:34.910 15:04:50 -- target/multitarget.sh@15 -- # nvmftestinit 00:17:34.910 15:04:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:34.910 15:04:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.910 15:04:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:34.910 15:04:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:34.910 15:04:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:34.910 15:04:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.910 15:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.910 15:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.910 15:04:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:34.910 15:04:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:34.910 15:04:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.910 15:04:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.910 15:04:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.910 15:04:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:34.910 15:04:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.910 15:04:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.910 15:04:50 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.910 15:04:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.910 15:04:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.910 15:04:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.910 15:04:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.910 15:04:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.910 15:04:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:34.910 15:04:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:34.910 Cannot find device "nvmf_tgt_br" 00:17:34.910 15:04:50 -- nvmf/common.sh@155 -- # true 00:17:34.910 15:04:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.910 Cannot find device "nvmf_tgt_br2" 00:17:34.910 15:04:50 -- nvmf/common.sh@156 -- # true 00:17:34.910 15:04:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:34.910 15:04:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:34.910 Cannot find device "nvmf_tgt_br" 00:17:34.910 15:04:50 -- nvmf/common.sh@158 -- # true 00:17:34.910 15:04:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:34.910 Cannot find device "nvmf_tgt_br2" 00:17:34.910 15:04:50 -- nvmf/common.sh@159 -- # true 00:17:34.910 15:04:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:34.910 15:04:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.168 15:04:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.168 15:04:50 -- nvmf/common.sh@162 -- # true 00:17:35.168 15:04:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.168 15:04:50 -- nvmf/common.sh@163 -- # true 00:17:35.168 15:04:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.168 15:04:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.168 15:04:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.168 15:04:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.168 15:04:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.168 15:04:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.168 15:04:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.168 15:04:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.168 15:04:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.168 15:04:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.168 15:04:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.168 15:04:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.168 15:04:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.168 15:04:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.168 15:04:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.168 15:04:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:35.168 15:04:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.168 15:04:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.168 15:04:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.168 15:04:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.168 15:04:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.168 15:04:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.168 15:04:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.168 15:04:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:35.168 00:17:35.168 --- 10.0.0.2 ping statistics --- 00:17:35.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.168 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:35.168 15:04:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:35.168 00:17:35.168 --- 10.0.0.3 ping statistics --- 00:17:35.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.168 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:35.168 15:04:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:35.168 00:17:35.168 --- 10.0.0.1 ping statistics --- 00:17:35.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.168 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:35.168 15:04:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.168 15:04:50 -- nvmf/common.sh@422 -- # return 0 00:17:35.168 15:04:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.168 15:04:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.168 15:04:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.169 15:04:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.169 15:04:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.169 15:04:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.169 15:04:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.169 15:04:50 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:35.169 15:04:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:35.169 15:04:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.169 15:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.427 15:04:50 -- nvmf/common.sh@470 -- # nvmfpid=66970 00:17:35.427 15:04:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:35.427 15:04:50 -- nvmf/common.sh@471 -- # waitforlisten 66970 00:17:35.427 15:04:50 -- common/autotest_common.sh@817 -- # '[' -z 66970 ']' 00:17:35.427 15:04:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.427 15:04:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
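[editor's note] The nvmf_veth_init sequence traced above (nvmf/common.sh) builds the test network before any NVMe-oF traffic flows: a namespace for the target, three veth pairs, a bridge joining the host-side ends, an iptables rule admitting TCP port 4420, and ping checks in both directions. A condensed standalone sketch of that topology, assuming root and reusing the names and addresses from the log (nvmf_tgt_ns_spdk, nvmf_br, 10.0.0.1/2/3), is given below; it is an illustration distilled from the traced commands, not the common.sh code itself.

#!/usr/bin/env bash
# Sketch of the veth/bridge topology that nvmf_veth_init sets up (per the trace above).
# Assumes root; names and addresses are taken from the log, error handling omitted.
set -e

ip netns add nvmf_tgt_ns_spdk

# Three veth pairs: one for the initiator, two for the target-side interfaces.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and allow NVMe/TCP (port 4420) in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the pings in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

[resuming trace]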
00:17:35.427 15:04:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.427 15:04:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.427 15:04:50 -- common/autotest_common.sh@10 -- # set +x 00:17:35.427 [2024-04-18 15:04:50.919510] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:35.427 [2024-04-18 15:04:50.919605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.427 [2024-04-18 15:04:51.063400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.685 [2024-04-18 15:04:51.157882] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.686 [2024-04-18 15:04:51.157947] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.686 [2024-04-18 15:04:51.157960] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.686 [2024-04-18 15:04:51.157974] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.686 [2024-04-18 15:04:51.157986] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.686 [2024-04-18 15:04:51.158267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.686 [2024-04-18 15:04:51.158452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.686 [2024-04-18 15:04:51.159234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.686 [2024-04-18 15:04:51.159237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.254 15:04:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.254 15:04:51 -- common/autotest_common.sh@850 -- # return 0 00:17:36.254 15:04:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:36.254 15:04:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.254 15:04:51 -- common/autotest_common.sh@10 -- # set +x 00:17:36.254 15:04:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.254 15:04:51 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:36.254 15:04:51 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:36.254 15:04:51 -- target/multitarget.sh@21 -- # jq length 00:17:36.254 15:04:51 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:36.254 15:04:51 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:36.513 "nvmf_tgt_1" 00:17:36.513 15:04:52 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:36.513 "nvmf_tgt_2" 00:17:36.513 15:04:52 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:36.513 15:04:52 -- target/multitarget.sh@28 -- # jq length 00:17:36.773 15:04:52 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:36.773 15:04:52 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:17:36.773 true 00:17:36.773 15:04:52 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:36.773 true 00:17:36.773 15:04:52 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:36.773 15:04:52 -- target/multitarget.sh@35 -- # jq length 00:17:37.033 15:04:52 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:37.033 15:04:52 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:37.033 15:04:52 -- target/multitarget.sh@41 -- # nvmftestfini 00:17:37.033 15:04:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:37.033 15:04:52 -- nvmf/common.sh@117 -- # sync 00:17:37.033 15:04:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.033 15:04:52 -- nvmf/common.sh@120 -- # set +e 00:17:37.033 15:04:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.033 15:04:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.033 rmmod nvme_tcp 00:17:37.033 rmmod nvme_fabrics 00:17:37.033 rmmod nvme_keyring 00:17:37.033 15:04:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.033 15:04:52 -- nvmf/common.sh@124 -- # set -e 00:17:37.033 15:04:52 -- nvmf/common.sh@125 -- # return 0 00:17:37.033 15:04:52 -- nvmf/common.sh@478 -- # '[' -n 66970 ']' 00:17:37.033 15:04:52 -- nvmf/common.sh@479 -- # killprocess 66970 00:17:37.033 15:04:52 -- common/autotest_common.sh@936 -- # '[' -z 66970 ']' 00:17:37.033 15:04:52 -- common/autotest_common.sh@940 -- # kill -0 66970 00:17:37.033 15:04:52 -- common/autotest_common.sh@941 -- # uname 00:17:37.033 15:04:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.033 15:04:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66970 00:17:37.033 15:04:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:37.033 killing process with pid 66970 00:17:37.033 15:04:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:37.033 15:04:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66970' 00:17:37.033 15:04:52 -- common/autotest_common.sh@955 -- # kill 66970 00:17:37.033 15:04:52 -- common/autotest_common.sh@960 -- # wait 66970 00:17:37.292 15:04:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:37.292 15:04:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:37.292 15:04:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:37.292 15:04:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.292 15:04:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.292 15:04:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.292 15:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.292 15:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.292 15:04:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:37.551 00:17:37.551 real 0m2.797s 00:17:37.551 user 0m7.947s 00:17:37.551 sys 0m0.850s 00:17:37.551 15:04:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.551 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:17:37.551 ************************************ 00:17:37.551 END TEST nvmf_multitarget 00:17:37.551 ************************************ 00:17:37.551 15:04:53 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.551 15:04:53 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:17:37.551 15:04:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.551 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:17:37.551 ************************************ 00:17:37.551 START TEST nvmf_rpc 00:17:37.551 ************************************ 00:17:37.551 15:04:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:37.810 * Looking for test storage... 00:17:37.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:37.810 15:04:53 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.810 15:04:53 -- nvmf/common.sh@7 -- # uname -s 00:17:37.810 15:04:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.810 15:04:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.810 15:04:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.810 15:04:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.810 15:04:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.810 15:04:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.810 15:04:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.810 15:04:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.810 15:04:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.810 15:04:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.810 15:04:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:37.810 15:04:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:37.810 15:04:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.810 15:04:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.810 15:04:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.810 15:04:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.810 15:04:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.810 15:04:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.810 15:04:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.810 15:04:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.810 15:04:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.810 15:04:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.811 15:04:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.811 15:04:53 -- paths/export.sh@5 -- # export PATH 00:17:37.811 15:04:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.811 15:04:53 -- nvmf/common.sh@47 -- # : 0 00:17:37.811 15:04:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.811 15:04:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.811 15:04:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.811 15:04:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.811 15:04:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.811 15:04:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.811 15:04:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.811 15:04:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.811 15:04:53 -- target/rpc.sh@11 -- # loops=5 00:17:37.811 15:04:53 -- target/rpc.sh@23 -- # nvmftestinit 00:17:37.811 15:04:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:37.811 15:04:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.811 15:04:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:37.811 15:04:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:37.811 15:04:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:37.811 15:04:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.811 15:04:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.811 15:04:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.811 15:04:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:37.811 15:04:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:37.811 15:04:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:37.811 15:04:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:37.811 15:04:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:37.811 15:04:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:37.811 15:04:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.811 15:04:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.811 15:04:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:37.811 15:04:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:37.811 15:04:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.811 15:04:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.811 15:04:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.811 15:04:53 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.811 15:04:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.811 15:04:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.811 15:04:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.811 15:04:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.811 15:04:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:37.811 15:04:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:37.811 Cannot find device "nvmf_tgt_br" 00:17:37.811 15:04:53 -- nvmf/common.sh@155 -- # true 00:17:37.811 15:04:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.811 Cannot find device "nvmf_tgt_br2" 00:17:37.811 15:04:53 -- nvmf/common.sh@156 -- # true 00:17:37.811 15:04:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:37.811 15:04:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:37.811 Cannot find device "nvmf_tgt_br" 00:17:37.811 15:04:53 -- nvmf/common.sh@158 -- # true 00:17:37.811 15:04:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:37.811 Cannot find device "nvmf_tgt_br2" 00:17:37.811 15:04:53 -- nvmf/common.sh@159 -- # true 00:17:37.811 15:04:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:37.811 15:04:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:38.070 15:04:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.070 15:04:53 -- nvmf/common.sh@162 -- # true 00:17:38.070 15:04:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.070 15:04:53 -- nvmf/common.sh@163 -- # true 00:17:38.070 15:04:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.070 15:04:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.070 15:04:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.070 15:04:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.070 15:04:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.070 15:04:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.070 15:04:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.070 15:04:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:38.070 15:04:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:38.070 15:04:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:38.070 15:04:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:38.070 15:04:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:38.070 15:04:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:38.070 15:04:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:38.070 15:04:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:38.070 15:04:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:38.070 15:04:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:17:38.070 15:04:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:38.070 15:04:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:38.070 15:04:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:38.070 15:04:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:38.070 15:04:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:38.070 15:04:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:38.070 15:04:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:38.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:38.070 00:17:38.070 --- 10.0.0.2 ping statistics --- 00:17:38.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.070 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:38.070 15:04:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:38.070 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:38.070 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:38.070 00:17:38.070 --- 10.0.0.3 ping statistics --- 00:17:38.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.070 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:38.070 15:04:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:38.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:38.070 00:17:38.070 --- 10.0.0.1 ping statistics --- 00:17:38.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.070 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:38.070 15:04:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.070 15:04:53 -- nvmf/common.sh@422 -- # return 0 00:17:38.070 15:04:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:38.070 15:04:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.070 15:04:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:38.070 15:04:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:38.070 15:04:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.070 15:04:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:38.070 15:04:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:38.070 15:04:53 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:38.070 15:04:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:38.070 15:04:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.070 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:17:38.329 15:04:53 -- nvmf/common.sh@470 -- # nvmfpid=67203 00:17:38.329 15:04:53 -- nvmf/common.sh@471 -- # waitforlisten 67203 00:17:38.329 15:04:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.329 15:04:53 -- common/autotest_common.sh@817 -- # '[' -z 67203 ']' 00:17:38.329 15:04:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.329 15:04:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.329 15:04:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
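[editor's note] The "Waiting for process to start up..." message above comes from nvmfappstart: the target binary is launched inside the test namespace and the script blocks until the JSON-RPC socket answers before issuing configuration calls. A hedged sketch of that pattern follows, using the paths and flags visible in the log; the polling loop is a simplified stand-in for the waitforlisten helper in common/autotest_common.sh, not its actual implementation.

# Launch nvmf_tgt inside the namespace and wait for its RPC socket (sketch).
SPDK=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll /var/tmp/spdk.sock until the target accepts RPCs (stand-in for waitforlisten).
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

# Once the target is up, the tests drive it over JSON-RPC, e.g.:
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" nvmf_get_stats | jq '.poll_groups | length'

[resuming trace]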
00:17:38.329 15:04:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.329 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:17:38.329 [2024-04-18 15:04:53.833056] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:38.329 [2024-04-18 15:04:53.833130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.329 [2024-04-18 15:04:53.977160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.604 [2024-04-18 15:04:54.061444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.604 [2024-04-18 15:04:54.061506] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.604 [2024-04-18 15:04:54.061516] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.604 [2024-04-18 15:04:54.061525] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.604 [2024-04-18 15:04:54.061532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.604 [2024-04-18 15:04:54.061760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.604 [2024-04-18 15:04:54.061944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.604 [2024-04-18 15:04:54.062689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.604 [2024-04-18 15:04:54.062690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.174 15:04:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:39.174 15:04:54 -- common/autotest_common.sh@850 -- # return 0 00:17:39.174 15:04:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:39.174 15:04:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:39.174 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.174 15:04:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.174 15:04:54 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:39.174 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.174 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.174 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.174 15:04:54 -- target/rpc.sh@26 -- # stats='{ 00:17:39.174 "poll_groups": [ 00:17:39.174 { 00:17:39.174 "admin_qpairs": 0, 00:17:39.174 "completed_nvme_io": 0, 00:17:39.174 "current_admin_qpairs": 0, 00:17:39.174 "current_io_qpairs": 0, 00:17:39.174 "io_qpairs": 0, 00:17:39.174 "name": "nvmf_tgt_poll_group_0", 00:17:39.174 "pending_bdev_io": 0, 00:17:39.174 "transports": [] 00:17:39.174 }, 00:17:39.174 { 00:17:39.174 "admin_qpairs": 0, 00:17:39.174 "completed_nvme_io": 0, 00:17:39.174 "current_admin_qpairs": 0, 00:17:39.174 "current_io_qpairs": 0, 00:17:39.174 "io_qpairs": 0, 00:17:39.174 "name": "nvmf_tgt_poll_group_1", 00:17:39.174 "pending_bdev_io": 0, 00:17:39.174 "transports": [] 00:17:39.174 }, 00:17:39.174 { 00:17:39.174 "admin_qpairs": 0, 00:17:39.174 "completed_nvme_io": 0, 00:17:39.174 "current_admin_qpairs": 0, 00:17:39.174 "current_io_qpairs": 0, 00:17:39.174 "io_qpairs": 0, 00:17:39.174 "name": "nvmf_tgt_poll_group_2", 00:17:39.174 "pending_bdev_io": 0, 00:17:39.174 "transports": [] 00:17:39.174 }, 00:17:39.174 { 
00:17:39.174 "admin_qpairs": 0, 00:17:39.174 "completed_nvme_io": 0, 00:17:39.174 "current_admin_qpairs": 0, 00:17:39.174 "current_io_qpairs": 0, 00:17:39.174 "io_qpairs": 0, 00:17:39.174 "name": "nvmf_tgt_poll_group_3", 00:17:39.174 "pending_bdev_io": 0, 00:17:39.174 "transports": [] 00:17:39.174 } 00:17:39.174 ], 00:17:39.174 "tick_rate": 2490000000 00:17:39.174 }' 00:17:39.174 15:04:54 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:39.174 15:04:54 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:39.174 15:04:54 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:39.174 15:04:54 -- target/rpc.sh@15 -- # wc -l 00:17:39.174 15:04:54 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:39.174 15:04:54 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:39.174 15:04:54 -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:39.174 15:04:54 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.174 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.433 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.433 [2024-04-18 15:04:54.883031] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.433 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.433 15:04:54 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:39.433 15:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.433 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:17:39.433 15:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:54 -- target/rpc.sh@33 -- # stats='{ 00:17:39.434 "poll_groups": [ 00:17:39.434 { 00:17:39.434 "admin_qpairs": 0, 00:17:39.434 "completed_nvme_io": 0, 00:17:39.434 "current_admin_qpairs": 0, 00:17:39.434 "current_io_qpairs": 0, 00:17:39.434 "io_qpairs": 0, 00:17:39.434 "name": "nvmf_tgt_poll_group_0", 00:17:39.434 "pending_bdev_io": 0, 00:17:39.434 "transports": [ 00:17:39.434 { 00:17:39.434 "trtype": "TCP" 00:17:39.434 } 00:17:39.434 ] 00:17:39.434 }, 00:17:39.434 { 00:17:39.434 "admin_qpairs": 0, 00:17:39.434 "completed_nvme_io": 0, 00:17:39.434 "current_admin_qpairs": 0, 00:17:39.434 "current_io_qpairs": 0, 00:17:39.434 "io_qpairs": 0, 00:17:39.434 "name": "nvmf_tgt_poll_group_1", 00:17:39.434 "pending_bdev_io": 0, 00:17:39.434 "transports": [ 00:17:39.434 { 00:17:39.434 "trtype": "TCP" 00:17:39.434 } 00:17:39.434 ] 00:17:39.434 }, 00:17:39.434 { 00:17:39.434 "admin_qpairs": 0, 00:17:39.434 "completed_nvme_io": 0, 00:17:39.434 "current_admin_qpairs": 0, 00:17:39.434 "current_io_qpairs": 0, 00:17:39.434 "io_qpairs": 0, 00:17:39.434 "name": "nvmf_tgt_poll_group_2", 00:17:39.434 "pending_bdev_io": 0, 00:17:39.434 "transports": [ 00:17:39.434 { 00:17:39.434 "trtype": "TCP" 00:17:39.434 } 00:17:39.434 ] 00:17:39.434 }, 00:17:39.434 { 00:17:39.434 "admin_qpairs": 0, 00:17:39.434 "completed_nvme_io": 0, 00:17:39.434 "current_admin_qpairs": 0, 00:17:39.434 "current_io_qpairs": 0, 00:17:39.434 "io_qpairs": 0, 00:17:39.434 "name": "nvmf_tgt_poll_group_3", 00:17:39.434 "pending_bdev_io": 0, 00:17:39.434 "transports": [ 00:17:39.434 { 00:17:39.434 "trtype": "TCP" 00:17:39.434 } 00:17:39.434 ] 00:17:39.434 } 00:17:39.434 ], 00:17:39.434 "tick_rate": 2490000000 00:17:39.434 }' 00:17:39.434 15:04:54 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:39.434 15:04:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:39.434 15:04:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:39.434 15:04:54 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:39.434 15:04:54 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:39.434 15:04:54 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:39.434 15:04:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:39.434 15:04:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:39.434 15:04:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:39.434 15:04:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:39.434 15:04:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:39.434 15:04:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:39.434 15:04:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:39.434 15:04:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:39.434 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.434 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.434 Malloc1 00:17:39.434 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:39.434 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.434 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.434 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.434 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.434 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.434 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:39.434 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.434 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.434 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.434 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.434 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.434 [2024-04-18 15:04:55.101420] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.434 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.434 15:04:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.2 -s 4420 00:17:39.434 15:04:55 -- common/autotest_common.sh@638 -- # local es=0 00:17:39.434 15:04:55 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.2 -s 4420 00:17:39.434 15:04:55 -- common/autotest_common.sh@626 -- # local arg=nvme 00:17:39.434 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.434 15:04:55 -- common/autotest_common.sh@630 -- # type -t nvme 00:17:39.434 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:17:39.434 15:04:55 -- common/autotest_common.sh@632 -- # type -P nvme 00:17:39.434 15:04:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.434 15:04:55 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:17:39.434 15:04:55 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:17:39.434 15:04:55 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.2 -s 4420 00:17:39.434 [2024-04-18 15:04:55.133753] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd' 00:17:39.693 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:39.693 could not add new controller: failed to write to nvme-fabrics device 00:17:39.693 15:04:55 -- common/autotest_common.sh@641 -- # es=1 00:17:39.693 15:04:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:39.693 15:04:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:39.693 15:04:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:39.693 15:04:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:39.693 15:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:39.693 15:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:39.693 15:04:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:39.693 15:04:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.693 15:04:55 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:39.693 15:04:55 -- common/autotest_common.sh@1184 -- # local i=0 00:17:39.693 15:04:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.693 15:04:55 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:39.693 15:04:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:42.228 15:04:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:42.228 15:04:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:42.229 15:04:57 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.229 15:04:57 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:42.229 15:04:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.229 15:04:57 -- common/autotest_common.sh@1194 -- # return 0 00:17:42.229 15:04:57 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.229 15:04:57 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.229 15:04:57 -- common/autotest_common.sh@1205 -- # local i=0 00:17:42.229 15:04:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:42.229 15:04:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.229 15:04:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:42.229 15:04:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.229 15:04:57 -- 
common/autotest_common.sh@1217 -- # return 0 00:17:42.229 15:04:57 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:42.229 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.229 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:17:42.229 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.229 15:04:57 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.229 15:04:57 -- common/autotest_common.sh@638 -- # local es=0 00:17:42.229 15:04:57 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.229 15:04:57 -- common/autotest_common.sh@626 -- # local arg=nvme 00:17:42.229 15:04:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:42.229 15:04:57 -- common/autotest_common.sh@630 -- # type -t nvme 00:17:42.229 15:04:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:42.229 15:04:57 -- common/autotest_common.sh@632 -- # type -P nvme 00:17:42.229 15:04:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:42.229 15:04:57 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:17:42.229 15:04:57 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:17:42.229 15:04:57 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.229 [2024-04-18 15:04:57.481218] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd' 00:17:42.229 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:42.229 could not add new controller: failed to write to nvme-fabrics device 00:17:42.229 15:04:57 -- common/autotest_common.sh@641 -- # es=1 00:17:42.229 15:04:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:42.229 15:04:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:42.229 15:04:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:42.229 15:04:57 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:42.229 15:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.229 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:17:42.229 15:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.229 15:04:57 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.229 15:04:57 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.229 15:04:57 -- common/autotest_common.sh@1184 -- # local i=0 00:17:42.229 15:04:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.229 15:04:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:42.229 15:04:57 -- common/autotest_common.sh@1191 -- # sleep 
2 00:17:44.137 15:04:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:44.137 15:04:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:44.137 15:04:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:44.137 15:04:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:44.137 15:04:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:44.137 15:04:59 -- common/autotest_common.sh@1194 -- # return 0 00:17:44.137 15:04:59 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.137 15:04:59 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.137 15:04:59 -- common/autotest_common.sh@1205 -- # local i=0 00:17:44.137 15:04:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:44.137 15:04:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.137 15:04:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:44.137 15:04:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.137 15:04:59 -- common/autotest_common.sh@1217 -- # return 0 00:17:44.137 15:04:59 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.137 15:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.137 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.137 15:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.137 15:04:59 -- target/rpc.sh@81 -- # seq 1 5 00:17:44.137 15:04:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:44.137 15:04:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:44.137 15:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.137 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.137 15:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.137 15:04:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.137 15:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.137 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.137 [2024-04-18 15:04:59.832741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.137 15:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.137 15:04:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:44.137 15:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.137 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.396 15:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.396 15:04:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:44.396 15:04:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:44.396 15:04:59 -- common/autotest_common.sh@10 -- # set +x 00:17:44.396 15:04:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:44.396 15:04:59 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.396 15:05:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:44.396 15:05:00 -- 
common/autotest_common.sh@1184 -- # local i=0 00:17:44.396 15:05:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:44.396 15:05:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:44.396 15:05:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:46.934 15:05:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:46.934 15:05:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:46.934 15:05:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:46.934 15:05:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.934 15:05:02 -- common/autotest_common.sh@1194 -- # return 0 00:17:46.934 15:05:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:46.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.934 15:05:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@1205 -- # local i=0 00:17:46.934 15:05:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:46.934 15:05:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:46.934 15:05:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@1217 -- # return 0 00:17:46.934 15:05:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:46.934 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.934 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:46.934 15:05:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.934 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 [2024-04-18 15:05:02.168455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:46.934 15:05:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:46.934 15:05:02 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.934 15:05:02 -- common/autotest_common.sh@10 -- # set +x 00:17:46.934 15:05:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.934 15:05:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:46.934 15:05:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:46.934 15:05:02 -- common/autotest_common.sh@1184 -- # local i=0 00:17:46.934 15:05:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.934 15:05:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:46.934 15:05:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:48.843 15:05:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:48.843 15:05:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:48.843 15:05:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:48.843 15:05:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:48.843 15:05:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.843 15:05:04 -- common/autotest_common.sh@1194 -- # return 0 00:17:48.843 15:05:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.101 15:05:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.101 15:05:04 -- common/autotest_common.sh@1205 -- # local i=0 00:17:49.101 15:05:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:49.101 15:05:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.101 15:05:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:49.101 15:05:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.101 15:05:04 -- common/autotest_common.sh@1217 -- # return 0 00:17:49.101 15:05:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:49.101 15:05:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 [2024-04-18 15:05:04.615993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:49.101 15:05:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.101 15:05:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 15:05:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.101 15:05:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.359 15:05:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.359 15:05:04 -- common/autotest_common.sh@1184 -- # local i=0 00:17:49.359 15:05:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.359 15:05:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:49.359 15:05:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:51.262 15:05:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:51.262 15:05:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:51.262 15:05:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.262 15:05:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:51.262 15:05:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.262 15:05:06 -- common/autotest_common.sh@1194 -- # return 0 00:17:51.262 15:05:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.262 15:05:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.262 15:05:06 -- common/autotest_common.sh@1205 -- # local i=0 00:17:51.262 15:05:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:51.262 15:05:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.262 15:05:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.262 15:05:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:51.262 15:05:06 -- common/autotest_common.sh@1217 -- # return 0 00:17:51.262 15:05:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:51.262 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.262 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 15:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.262 15:05:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.262 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.262 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 15:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.262 15:05:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.262 15:05:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.262 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 
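[editor's note] Each pass of the loop traced here (target/rpc.sh, for i in $(seq 1 $loops)) repeats the same create/connect/teardown cycle. One iteration, reconstructed from the rpc_cmd lines above, is sketched below; the test's rpc_cmd and waitforserial helpers are replaced with direct rpc.py and lsblk calls as an approximation.

# One iteration of the rpc.sh loop, condensed from the trace (sketch, not the script itself).
SPDK=/home/vagrant/spdk_repo/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd
HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd

"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
"$SPDK/scripts/rpc.py" nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
     -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Wait until a block device with the expected serial shows up (approximates waitforserial),
# then tear everything down again.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

[resuming trace]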
00:17:51.262 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 15:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.262 15:05:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.262 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.262 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.522 [2024-04-18 15:05:06.971786] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.522 15:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.522 15:05:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.522 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.522 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.522 15:05:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.522 15:05:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.522 15:05:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:51.522 15:05:06 -- common/autotest_common.sh@10 -- # set +x 00:17:51.522 15:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:51.522 15:05:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:51.522 15:05:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:51.522 15:05:07 -- common/autotest_common.sh@1184 -- # local i=0 00:17:51.522 15:05:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.522 15:05:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:51.522 15:05:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:53.499 15:05:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:53.499 15:05:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:53.499 15:05:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.758 15:05:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:53.758 15:05:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.758 15:05:09 -- common/autotest_common.sh@1194 -- # return 0 00:17:53.758 15:05:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.758 15:05:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:53.758 15:05:09 -- common/autotest_common.sh@1205 -- # local i=0 00:17:53.758 15:05:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:53.758 15:05:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.758 15:05:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.758 15:05:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:53.758 15:05:09 -- common/autotest_common.sh@1217 -- # return 0 00:17:53.758 15:05:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:53.758 15:05:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 [2024-04-18 15:05:09.324013] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:53.758 15:05:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.758 15:05:09 -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 15:05:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.758 15:05:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.017 15:05:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:54.017 15:05:09 -- common/autotest_common.sh@1184 -- # local i=0 00:17:54.017 15:05:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.017 15:05:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:54.017 15:05:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:55.929 15:05:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:55.929 15:05:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:55.929 15:05:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.929 15:05:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:55.929 15:05:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.929 15:05:11 -- common/autotest_common.sh@1194 -- # return 0 00:17:55.929 15:05:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:55.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.929 15:05:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:55.929 15:05:11 -- common/autotest_common.sh@1205 -- # local i=0 00:17:55.929 15:05:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:55.929 15:05:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.188 15:05:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:17:56.188 15:05:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.188 15:05:11 -- common/autotest_common.sh@1217 -- # return 0 00:17:56.188 15:05:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:56.188 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.188 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.188 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.188 15:05:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.188 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.188 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.188 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.188 15:05:11 -- target/rpc.sh@99 -- # seq 1 5 00:17:56.188 15:05:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.188 15:05:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.188 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.188 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.188 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.188 15:05:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 [2024-04-18 15:05:11.695968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.189 15:05:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 [2024-04-18 15:05:11.755919] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.189 15:05:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 [2024-04-18 15:05:11.823877] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 
15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.189 15:05:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 [2024-04-18 15:05:11.883811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.189 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.189 15:05:11 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.189 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.189 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.448 15:05:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.448 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.448 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.448 15:05:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.448 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.448 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.448 15:05:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.448 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.448 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.448 15:05:11 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:56.448 15:05:11 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.448 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.448 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.448 15:05:11 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.448 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.448 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.448 [2024-04-18 15:05:11.947750] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.448 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:11 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:56.449 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.449 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.449 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:11 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.449 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.449 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.449 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:11 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:56.449 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.449 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.449 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:11 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.449 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.449 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.449 15:05:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:11 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:56.449 15:05:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:56.449 15:05:11 -- common/autotest_common.sh@10 -- # set +x 00:17:56.449 15:05:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:56.449 15:05:12 -- target/rpc.sh@110 -- # stats='{ 00:17:56.449 "poll_groups": [ 00:17:56.449 { 00:17:56.449 "admin_qpairs": 2, 00:17:56.449 "completed_nvme_io": 66, 00:17:56.449 "current_admin_qpairs": 0, 00:17:56.449 "current_io_qpairs": 0, 00:17:56.449 "io_qpairs": 16, 00:17:56.449 "name": "nvmf_tgt_poll_group_0", 00:17:56.449 "pending_bdev_io": 0, 00:17:56.449 "transports": [ 00:17:56.449 { 00:17:56.449 "trtype": "TCP" 00:17:56.449 } 00:17:56.449 ] 00:17:56.449 }, 00:17:56.449 { 00:17:56.449 "admin_qpairs": 3, 00:17:56.449 "completed_nvme_io": 117, 00:17:56.449 "current_admin_qpairs": 0, 00:17:56.449 "current_io_qpairs": 0, 00:17:56.449 "io_qpairs": 17, 00:17:56.449 "name": "nvmf_tgt_poll_group_1", 00:17:56.449 "pending_bdev_io": 0, 00:17:56.449 "transports": [ 00:17:56.449 { 00:17:56.449 "trtype": "TCP" 00:17:56.449 } 00:17:56.449 ] 00:17:56.449 }, 00:17:56.449 { 00:17:56.449 "admin_qpairs": 1, 00:17:56.449 "completed_nvme_io": 168, 00:17:56.449 "current_admin_qpairs": 0, 00:17:56.449 "current_io_qpairs": 0, 00:17:56.449 "io_qpairs": 19, 00:17:56.449 "name": "nvmf_tgt_poll_group_2", 00:17:56.449 "pending_bdev_io": 0, 00:17:56.449 "transports": [ 00:17:56.449 { 00:17:56.449 "trtype": "TCP" 00:17:56.449 } 00:17:56.449 ] 00:17:56.449 }, 00:17:56.449 { 00:17:56.449 "admin_qpairs": 1, 00:17:56.449 "completed_nvme_io": 69, 00:17:56.449 "current_admin_qpairs": 0, 00:17:56.449 "current_io_qpairs": 0, 00:17:56.449 "io_qpairs": 18, 00:17:56.449 "name": "nvmf_tgt_poll_group_3", 00:17:56.449 "pending_bdev_io": 0, 00:17:56.449 "transports": [ 00:17:56.449 { 00:17:56.449 "trtype": "TCP" 00:17:56.449 } 00:17:56.449 ] 00:17:56.449 } 00:17:56.449 ], 00:17:56.449 "tick_rate": 2490000000 00:17:56.449 }' 00:17:56.449 15:05:12 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:17:56.449 15:05:12 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:56.449 15:05:12 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:56.449 15:05:12 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:56.449 15:05:12 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:17:56.449 15:05:12 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:56.449 15:05:12 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:56.449 15:05:12 -- target/rpc.sh@123 -- # nvmftestfini 00:17:56.449 15:05:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:56.449 15:05:12 -- nvmf/common.sh@117 -- # sync 00:17:56.449 15:05:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.449 15:05:12 -- nvmf/common.sh@120 -- # set +e 00:17:56.449 15:05:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.449 15:05:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.449 rmmod nvme_tcp 00:17:56.707 rmmod nvme_fabrics 00:17:56.707 rmmod nvme_keyring 00:17:56.707 15:05:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.707 15:05:12 -- nvmf/common.sh@124 -- # set -e 00:17:56.707 15:05:12 -- nvmf/common.sh@125 -- # return 0 00:17:56.707 15:05:12 -- nvmf/common.sh@478 -- # '[' -n 67203 ']' 00:17:56.707 15:05:12 -- nvmf/common.sh@479 -- # killprocess 67203 00:17:56.707 15:05:12 -- common/autotest_common.sh@936 -- # '[' -z 67203 ']' 00:17:56.707 15:05:12 -- common/autotest_common.sh@940 -- # kill -0 67203 00:17:56.707 15:05:12 -- common/autotest_common.sh@941 -- # uname 00:17:56.707 15:05:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.707 15:05:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67203 00:17:56.707 killing process with pid 67203 00:17:56.707 15:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.707 15:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.707 15:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67203' 00:17:56.707 15:05:12 -- common/autotest_common.sh@955 -- # kill 67203 00:17:56.707 15:05:12 -- common/autotest_common.sh@960 -- # wait 67203 00:17:56.965 15:05:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:56.965 15:05:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:56.965 15:05:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:56.965 15:05:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.965 15:05:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.965 15:05:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.965 15:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.965 15:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.965 15:05:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:56.965 00:17:56.965 real 0m19.342s 00:17:56.965 user 1m11.694s 00:17:56.965 sys 0m3.634s 00:17:56.965 15:05:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:56.965 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:17:56.965 ************************************ 00:17:56.965 END TEST nvmf_rpc 00:17:56.965 ************************************ 00:17:56.965 15:05:12 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:56.965 15:05:12 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:56.965 15:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.965 15:05:12 -- common/autotest_common.sh@10 -- # set +x 00:17:57.224 ************************************ 00:17:57.224 START TEST nvmf_invalid 00:17:57.224 ************************************ 00:17:57.224 15:05:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:57.224 * Looking for test storage... 00:17:57.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:57.224 15:05:12 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.224 15:05:12 -- nvmf/common.sh@7 -- # uname -s 00:17:57.224 15:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.224 15:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.224 15:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.224 15:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.224 15:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.224 15:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.224 15:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.224 15:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.224 15:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.224 15:05:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:57.224 15:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:17:57.224 15:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.224 15:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.224 15:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.224 15:05:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.224 15:05:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.224 15:05:12 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.224 15:05:12 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.224 15:05:12 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.224 15:05:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.224 15:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.224 15:05:12 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.224 15:05:12 -- paths/export.sh@5 -- # export PATH 00:17:57.224 15:05:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.224 15:05:12 -- nvmf/common.sh@47 -- # : 0 00:17:57.224 15:05:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.224 15:05:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.224 15:05:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.224 15:05:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.224 15:05:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.224 15:05:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.224 15:05:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.224 15:05:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.224 15:05:12 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:17:57.224 15:05:12 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.224 15:05:12 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:57.224 15:05:12 -- target/invalid.sh@14 -- # target=foobar 00:17:57.224 15:05:12 -- target/invalid.sh@16 -- # RANDOM=0 00:17:57.224 15:05:12 -- target/invalid.sh@34 -- # nvmftestinit 00:17:57.224 15:05:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:57.224 15:05:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.224 15:05:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:57.224 15:05:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:57.224 15:05:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:57.224 15:05:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.224 15:05:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.224 15:05:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.224 15:05:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:57.224 15:05:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:57.224 15:05:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.224 15:05:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.224 15:05:12 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.224 15:05:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.224 15:05:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.224 15:05:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.224 15:05:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.224 15:05:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.224 15:05:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.224 15:05:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.224 15:05:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.224 15:05:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.224 15:05:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.224 15:05:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.224 Cannot find device "nvmf_tgt_br" 00:17:57.224 15:05:12 -- nvmf/common.sh@155 -- # true 00:17:57.224 15:05:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.224 Cannot find device "nvmf_tgt_br2" 00:17:57.224 15:05:12 -- nvmf/common.sh@156 -- # true 00:17:57.224 15:05:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.224 15:05:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:57.483 Cannot find device "nvmf_tgt_br" 00:17:57.483 15:05:12 -- nvmf/common.sh@158 -- # true 00:17:57.483 15:05:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:57.483 Cannot find device "nvmf_tgt_br2" 00:17:57.483 15:05:12 -- nvmf/common.sh@159 -- # true 00:17:57.483 15:05:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:57.483 15:05:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:57.483 15:05:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.483 15:05:13 -- nvmf/common.sh@162 -- # true 00:17:57.483 15:05:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.483 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.483 15:05:13 -- nvmf/common.sh@163 -- # true 00:17:57.483 15:05:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.483 15:05:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.483 15:05:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.483 15:05:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.483 15:05:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.483 15:05:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.483 15:05:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.483 15:05:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.483 15:05:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.483 15:05:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:57.483 15:05:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:57.483 15:05:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:57.483 15:05:13 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:17:57.483 15:05:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.483 15:05:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.483 15:05:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.483 15:05:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:57.483 15:05:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:57.483 15:05:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.483 15:05:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.742 15:05:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.742 15:05:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.742 15:05:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.742 15:05:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:57.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:17:57.742 00:17:57.742 --- 10.0.0.2 ping statistics --- 00:17:57.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.742 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:57.742 15:05:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:57.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:57.742 00:17:57.742 --- 10.0.0.3 ping statistics --- 00:17:57.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.742 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:57.742 15:05:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:17:57.742 00:17:57.742 --- 10.0.0.1 ping statistics --- 00:17:57.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.742 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:57.742 15:05:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.742 15:05:13 -- nvmf/common.sh@422 -- # return 0 00:17:57.742 15:05:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:57.742 15:05:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.742 15:05:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:57.742 15:05:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:57.742 15:05:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.742 15:05:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:57.742 15:05:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:57.742 15:05:13 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:57.742 15:05:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:57.742 15:05:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:57.742 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 15:05:13 -- nvmf/common.sh@470 -- # nvmfpid=67730 00:17:57.742 15:05:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.742 15:05:13 -- nvmf/common.sh@471 -- # waitforlisten 67730 00:17:57.742 15:05:13 -- common/autotest_common.sh@817 -- # '[' -z 67730 ']' 00:17:57.742 15:05:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.742 15:05:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.742 15:05:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.742 15:05:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.742 15:05:13 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 [2024-04-18 15:05:13.345009] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:17:57.742 [2024-04-18 15:05:13.345078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.000 [2024-04-18 15:05:13.488489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.000 [2024-04-18 15:05:13.567403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.000 [2024-04-18 15:05:13.567463] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.000 [2024-04-18 15:05:13.567474] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.000 [2024-04-18 15:05:13.567483] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.000 [2024-04-18 15:05:13.567490] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
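Stripped of the xtrace prefixes, the network topology that nvmf_veth_init builds for this run reduces to the commands below (interface names and addresses exactly as logged; this is a condensed sketch, not the common.sh source):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # initiator -> target IPs
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

The target application is then started inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why it listens on 10.0.0.2/10.0.0.3 while the host-side initiator connects from 10.0.0.1.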
00:17:58.000 [2024-04-18 15:05:13.567713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.000 [2024-04-18 15:05:13.567842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.000 [2024-04-18 15:05:13.568835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.000 [2024-04-18 15:05:13.568836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.566 15:05:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.566 15:05:14 -- common/autotest_common.sh@850 -- # return 0 00:17:58.566 15:05:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:58.566 15:05:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:58.566 15:05:14 -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 15:05:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.566 15:05:14 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:58.566 15:05:14 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6754 00:17:58.825 [2024-04-18 15:05:14.428219] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:58.825 15:05:14 -- target/invalid.sh@40 -- # out='2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6754 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:58.825 request: 00:17:58.825 { 00:17:58.825 "method": "nvmf_create_subsystem", 00:17:58.825 "params": { 00:17:58.825 "nqn": "nqn.2016-06.io.spdk:cnode6754", 00:17:58.825 "tgt_name": "foobar" 00:17:58.825 } 00:17:58.825 } 00:17:58.825 Got JSON-RPC error response 00:17:58.825 GoRPCClient: error on JSON-RPC call' 00:17:58.825 15:05:14 -- target/invalid.sh@41 -- # [[ 2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6754 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:17:58.825 request: 00:17:58.825 { 00:17:58.825 "method": "nvmf_create_subsystem", 00:17:58.825 "params": { 00:17:58.825 "nqn": "nqn.2016-06.io.spdk:cnode6754", 00:17:58.825 "tgt_name": "foobar" 00:17:58.825 } 00:17:58.825 } 00:17:58.825 Got JSON-RPC error response 00:17:58.825 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:58.825 15:05:14 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:58.825 15:05:14 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19795 00:17:59.084 [2024-04-18 15:05:14.632241] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19795: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:59.084 15:05:14 -- target/invalid.sh@45 -- # out='2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19795 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:59.084 request: 00:17:59.084 { 00:17:59.084 "method": "nvmf_create_subsystem", 00:17:59.084 "params": { 00:17:59.084 "nqn": "nqn.2016-06.io.spdk:cnode19795", 00:17:59.084 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:17:59.084 } 00:17:59.084 } 00:17:59.084 Got JSON-RPC error response 00:17:59.084 GoRPCClient: error on JSON-RPC call' 00:17:59.084 15:05:14 -- target/invalid.sh@46 -- # [[ 2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19795 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:17:59.084 request: 00:17:59.084 { 00:17:59.084 "method": "nvmf_create_subsystem", 00:17:59.084 "params": { 00:17:59.084 "nqn": "nqn.2016-06.io.spdk:cnode19795", 00:17:59.084 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:17:59.084 } 00:17:59.084 } 00:17:59.084 Got JSON-RPC error response 00:17:59.084 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:59.084 15:05:14 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:59.084 15:05:14 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30053 00:17:59.343 [2024-04-18 15:05:14.836217] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30053: invalid model number 'SPDK_Controller' 00:17:59.343 15:05:14 -- target/invalid.sh@50 -- # out='2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30053], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:59.343 request: 00:17:59.343 { 00:17:59.343 "method": "nvmf_create_subsystem", 00:17:59.343 "params": { 00:17:59.343 "nqn": "nqn.2016-06.io.spdk:cnode30053", 00:17:59.343 "model_number": "SPDK_Controller\u001f" 00:17:59.343 } 00:17:59.343 } 00:17:59.343 Got JSON-RPC error response 00:17:59.343 GoRPCClient: error on JSON-RPC call' 00:17:59.343 15:05:14 -- target/invalid.sh@51 -- # [[ 2024/04/18 15:05:14 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30053], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:17:59.343 request: 00:17:59.343 { 00:17:59.343 "method": "nvmf_create_subsystem", 00:17:59.343 "params": { 00:17:59.343 "nqn": "nqn.2016-06.io.spdk:cnode30053", 00:17:59.343 "model_number": "SPDK_Controller\u001f" 00:17:59.343 } 00:17:59.343 } 00:17:59.343 Got JSON-RPC error response 00:17:59.343 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:59.343 15:05:14 -- target/invalid.sh@54 -- # gen_random_s 21 00:17:59.343 15:05:14 -- target/invalid.sh@19 -- # local length=21 ll 00:17:59.343 15:05:14 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:59.343 15:05:14 -- target/invalid.sh@21 -- # local chars 00:17:59.343 15:05:14 -- target/invalid.sh@22 -- # local string 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 84 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=T 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 67 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=C 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 116 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=t 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 43 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=+ 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 78 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=N 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 52 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=4 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 108 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=l 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 32 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=' ' 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 57 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=9 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 113 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=q 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 111 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=o 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 74 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=J 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 82 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=R 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 54 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=6 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 53 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=5 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 127 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=$'\177' 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 116 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # string+=t 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:14 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:14 -- target/invalid.sh@25 -- # printf %x 58 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # string+=: 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # printf %x 125 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # string+='}' 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # printf %x 82 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # string+=R 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.343 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # printf %x 118 00:17:59.343 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:59.344 15:05:15 -- target/invalid.sh@25 -- # string+=v 00:17:59.344 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.344 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.344 15:05:15 -- target/invalid.sh@28 -- # [[ T == \- ]] 00:17:59.344 15:05:15 -- target/invalid.sh@31 -- # echo 'TCt+N4l 9qoJR65t:}Rv' 00:17:59.344 15:05:15 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'TCt+N4l 9qoJR65t:}Rv' nqn.2016-06.io.spdk:cnode11751 
00:17:59.603 [2024-04-18 15:05:15.207987] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11751: invalid serial number 'TCt+N4l 9qoJR65t:}Rv' 00:17:59.603 15:05:15 -- target/invalid.sh@54 -- # out='2024/04/18 15:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11751 serial_number:TCt+N4l 9qoJR65t:}Rv], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN TCt+N4l 9qoJR65t:}Rv 00:17:59.603 request: 00:17:59.603 { 00:17:59.603 "method": "nvmf_create_subsystem", 00:17:59.603 "params": { 00:17:59.603 "nqn": "nqn.2016-06.io.spdk:cnode11751", 00:17:59.603 "serial_number": "TCt+N4l 9qoJR65\u007ft:}Rv" 00:17:59.603 } 00:17:59.603 } 00:17:59.603 Got JSON-RPC error response 00:17:59.603 GoRPCClient: error on JSON-RPC call' 00:17:59.603 15:05:15 -- target/invalid.sh@55 -- # [[ 2024/04/18 15:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11751 serial_number:TCt+N4l 9qoJR65t:}Rv], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN TCt+N4l 9qoJR65t:}Rv 00:17:59.603 request: 00:17:59.603 { 00:17:59.603 "method": "nvmf_create_subsystem", 00:17:59.603 "params": { 00:17:59.603 "nqn": "nqn.2016-06.io.spdk:cnode11751", 00:17:59.603 "serial_number": "TCt+N4l 9qoJR65\u007ft:}Rv" 00:17:59.603 } 00:17:59.603 } 00:17:59.603 Got JSON-RPC error response 00:17:59.603 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:59.603 15:05:15 -- target/invalid.sh@58 -- # gen_random_s 41 00:17:59.603 15:05:15 -- target/invalid.sh@19 -- # local length=41 ll 00:17:59.603 15:05:15 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:59.603 15:05:15 -- target/invalid.sh@21 -- # local chars 00:17:59.603 15:05:15 -- target/invalid.sh@22 -- # local string 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # printf %x 125 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # string+='}' 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # printf %x 100 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # string+=d 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.603 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # printf %x 39 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:59.603 15:05:15 -- target/invalid.sh@25 -- # string+=\' 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 90 00:17:59.604 
15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+=Z 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 120 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+=x 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 116 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+=t 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 96 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+='`' 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 122 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+=z 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # printf %x 51 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:59.604 15:05:15 -- target/invalid.sh@25 -- # string+=3 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.604 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # printf %x 77 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # string+=M 00:17:59.863 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.863 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # printf %x 95 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # string+=_ 00:17:59.863 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.863 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # printf %x 68 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:59.863 15:05:15 -- target/invalid.sh@25 -- # string+=D 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 76 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=L 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 122 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=z 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 90 00:17:59.864 
15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=Z 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 69 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=E 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 91 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+='[' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 33 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+='!' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 52 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=4 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 49 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=1 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 41 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=')' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 61 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+== 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 92 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+='\' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 51 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=3 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 117 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=u 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 66 00:17:59.864 
15:05:15 -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=B 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 120 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=x 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 66 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=B 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 56 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=8 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 37 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=% 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 107 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=k 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 92 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+='\' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 80 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=P 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 51 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=3 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 75 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=K 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 44 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=, 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 85 00:17:59.864 15:05:15 
-- target/invalid.sh@25 -- # echo -e '\x55' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=U 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 42 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+='*' 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 106 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=j 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 107 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=k 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # printf %x 43 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:59.864 15:05:15 -- target/invalid.sh@25 -- # string+=+ 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll++ )) 00:17:59.864 15:05:15 -- target/invalid.sh@24 -- # (( ll < length )) 00:17:59.864 15:05:15 -- target/invalid.sh@28 -- # [[ } == \- ]] 00:17:59.864 15:05:15 -- target/invalid.sh@31 -- # echo '}d'\''Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+' 00:17:59.864 15:05:15 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '}d'\''Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+' nqn.2016-06.io.spdk:cnode2924 00:18:00.123 [2024-04-18 15:05:15.731645] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2924: invalid model number '}d'Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+' 00:18:00.123 15:05:15 -- target/invalid.sh@58 -- # out='2024/04/18 15:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:}d'\''Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+ nqn:nqn.2016-06.io.spdk:cnode2924], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN }d'\''Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+ 00:18:00.123 request: 00:18:00.123 { 00:18:00.123 "method": "nvmf_create_subsystem", 00:18:00.123 "params": { 00:18:00.123 "nqn": "nqn.2016-06.io.spdk:cnode2924", 00:18:00.123 "model_number": "}d'\''Zxt`z3M_DLzZE[!41)=\\3uBxB8%k\\P3K,U*jk+" 00:18:00.123 } 00:18:00.123 } 00:18:00.123 Got JSON-RPC error response 00:18:00.123 GoRPCClient: error on JSON-RPC call' 00:18:00.123 15:05:15 -- target/invalid.sh@59 -- # [[ 2024/04/18 15:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:}d'Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+ nqn:nqn.2016-06.io.spdk:cnode2924], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN }d'Zxt`z3M_DLzZE[!41)=\3uBxB8%k\P3K,U*jk+ 00:18:00.123 request: 00:18:00.123 { 00:18:00.123 "method": "nvmf_create_subsystem", 00:18:00.123 "params": { 00:18:00.123 "nqn": "nqn.2016-06.io.spdk:cnode2924", 00:18:00.123 "model_number": "}d'Zxt`z3M_DLzZE[!41)=\\3uBxB8%k\\P3K,U*jk+" 00:18:00.123 } 00:18:00.123 } 00:18:00.123 Got JSON-RPC error response 00:18:00.123 GoRPCClient: error on JSON-RPC call == 
*\I\n\v\a\l\i\d\ \M\N* ]] 00:18:00.123 15:05:15 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:00.386 [2024-04-18 15:05:15.915671] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.386 15:05:15 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:00.649 15:05:16 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:00.649 15:05:16 -- target/invalid.sh@67 -- # head -n 1 00:18:00.649 15:05:16 -- target/invalid.sh@67 -- # echo '' 00:18:00.649 15:05:16 -- target/invalid.sh@67 -- # IP= 00:18:00.649 15:05:16 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:00.649 [2024-04-18 15:05:16.340683] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:00.907 15:05:16 -- target/invalid.sh@69 -- # out='2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:00.907 request: 00:18:00.907 { 00:18:00.907 "method": "nvmf_subsystem_remove_listener", 00:18:00.907 "params": { 00:18:00.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:00.907 "listen_address": { 00:18:00.907 "trtype": "tcp", 00:18:00.907 "traddr": "", 00:18:00.907 "trsvcid": "4421" 00:18:00.907 } 00:18:00.907 } 00:18:00.907 } 00:18:00.907 Got JSON-RPC error response 00:18:00.907 GoRPCClient: error on JSON-RPC call' 00:18:00.907 15:05:16 -- target/invalid.sh@70 -- # [[ 2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:00.907 request: 00:18:00.907 { 00:18:00.907 "method": "nvmf_subsystem_remove_listener", 00:18:00.907 "params": { 00:18:00.907 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:00.907 "listen_address": { 00:18:00.907 "trtype": "tcp", 00:18:00.907 "traddr": "", 00:18:00.907 "trsvcid": "4421" 00:18:00.907 } 00:18:00.907 } 00:18:00.907 } 00:18:00.907 Got JSON-RPC error response 00:18:00.907 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:00.907 15:05:16 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19117 -i 0 00:18:00.907 [2024-04-18 15:05:16.540530] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19117: invalid cntlid range [0-65519] 00:18:00.907 15:05:16 -- target/invalid.sh@73 -- # out='2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode19117], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:00.907 request: 00:18:00.907 { 00:18:00.907 "method": "nvmf_create_subsystem", 00:18:00.907 "params": { 00:18:00.907 "nqn": "nqn.2016-06.io.spdk:cnode19117", 00:18:00.907 "min_cntlid": 0 00:18:00.907 } 00:18:00.907 } 00:18:00.907 Got JSON-RPC error response 00:18:00.907 GoRPCClient: error on JSON-RPC call' 00:18:00.907 15:05:16 -- target/invalid.sh@74 -- # [[ 2024/04/18 15:05:16 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode19117], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:00.907 request: 00:18:00.907 { 00:18:00.907 "method": "nvmf_create_subsystem", 00:18:00.907 "params": { 00:18:00.907 "nqn": "nqn.2016-06.io.spdk:cnode19117", 00:18:00.907 "min_cntlid": 0 00:18:00.907 } 00:18:00.907 } 00:18:00.907 Got JSON-RPC error response 00:18:00.907 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:00.907 15:05:16 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4116 -i 65520 00:18:01.165 [2024-04-18 15:05:16.776366] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4116: invalid cntlid range [65520-65519] 00:18:01.165 15:05:16 -- target/invalid.sh@75 -- # out='2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4116], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:01.165 request: 00:18:01.165 { 00:18:01.165 "method": "nvmf_create_subsystem", 00:18:01.165 "params": { 00:18:01.165 "nqn": "nqn.2016-06.io.spdk:cnode4116", 00:18:01.165 "min_cntlid": 65520 00:18:01.165 } 00:18:01.165 } 00:18:01.165 Got JSON-RPC error response 00:18:01.165 GoRPCClient: error on JSON-RPC call' 00:18:01.165 15:05:16 -- target/invalid.sh@76 -- # [[ 2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode4116], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:01.165 request: 00:18:01.165 { 00:18:01.165 "method": "nvmf_create_subsystem", 00:18:01.165 "params": { 00:18:01.165 "nqn": "nqn.2016-06.io.spdk:cnode4116", 00:18:01.165 "min_cntlid": 65520 00:18:01.165 } 00:18:01.165 } 00:18:01.165 Got JSON-RPC error response 00:18:01.165 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:01.165 15:05:16 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21450 -I 0 00:18:01.423 [2024-04-18 15:05:16.980207] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21450: invalid cntlid range [1-0] 00:18:01.423 15:05:17 -- target/invalid.sh@77 -- # out='2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21450], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:01.423 request: 00:18:01.423 { 00:18:01.423 "method": "nvmf_create_subsystem", 00:18:01.423 "params": { 00:18:01.423 "nqn": "nqn.2016-06.io.spdk:cnode21450", 00:18:01.423 "max_cntlid": 0 00:18:01.423 } 00:18:01.423 } 00:18:01.423 Got JSON-RPC error response 00:18:01.423 GoRPCClient: error on JSON-RPC call' 00:18:01.423 15:05:17 -- target/invalid.sh@78 -- # [[ 2024/04/18 15:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode21450], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:01.423 request: 00:18:01.423 { 00:18:01.423 "method": "nvmf_create_subsystem", 00:18:01.423 "params": { 00:18:01.423 "nqn": 
"nqn.2016-06.io.spdk:cnode21450", 00:18:01.423 "max_cntlid": 0 00:18:01.423 } 00:18:01.423 } 00:18:01.423 Got JSON-RPC error response 00:18:01.423 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:01.423 15:05:17 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18070 -I 65520 00:18:01.683 [2024-04-18 15:05:17.168208] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18070: invalid cntlid range [1-65520] 00:18:01.683 15:05:17 -- target/invalid.sh@79 -- # out='2024/04/18 15:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18070], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:01.683 request: 00:18:01.683 { 00:18:01.683 "method": "nvmf_create_subsystem", 00:18:01.683 "params": { 00:18:01.683 "nqn": "nqn.2016-06.io.spdk:cnode18070", 00:18:01.683 "max_cntlid": 65520 00:18:01.683 } 00:18:01.683 } 00:18:01.683 Got JSON-RPC error response 00:18:01.683 GoRPCClient: error on JSON-RPC call' 00:18:01.683 15:05:17 -- target/invalid.sh@80 -- # [[ 2024/04/18 15:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode18070], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:01.683 request: 00:18:01.683 { 00:18:01.683 "method": "nvmf_create_subsystem", 00:18:01.683 "params": { 00:18:01.683 "nqn": "nqn.2016-06.io.spdk:cnode18070", 00:18:01.683 "max_cntlid": 65520 00:18:01.683 } 00:18:01.683 } 00:18:01.683 Got JSON-RPC error response 00:18:01.683 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:01.683 15:05:17 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23601 -i 6 -I 5 00:18:01.683 [2024-04-18 15:05:17.380127] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23601: invalid cntlid range [6-5] 00:18:01.942 15:05:17 -- target/invalid.sh@83 -- # out='2024/04/18 15:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode23601], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:01.942 request: 00:18:01.942 { 00:18:01.942 "method": "nvmf_create_subsystem", 00:18:01.942 "params": { 00:18:01.942 "nqn": "nqn.2016-06.io.spdk:cnode23601", 00:18:01.942 "min_cntlid": 6, 00:18:01.942 "max_cntlid": 5 00:18:01.942 } 00:18:01.942 } 00:18:01.942 Got JSON-RPC error response 00:18:01.942 GoRPCClient: error on JSON-RPC call' 00:18:01.942 15:05:17 -- target/invalid.sh@84 -- # [[ 2024/04/18 15:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode23601], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:01.942 request: 00:18:01.942 { 00:18:01.942 "method": "nvmf_create_subsystem", 00:18:01.942 "params": { 00:18:01.942 "nqn": "nqn.2016-06.io.spdk:cnode23601", 00:18:01.942 "min_cntlid": 6, 00:18:01.942 "max_cntlid": 5 00:18:01.942 } 00:18:01.942 } 00:18:01.942 Got JSON-RPC error response 00:18:01.942 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:01.942 15:05:17 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:01.942 15:05:17 -- target/invalid.sh@87 -- # out='request: 00:18:01.942 { 00:18:01.942 "name": "foobar", 00:18:01.942 "method": "nvmf_delete_target", 00:18:01.942 "req_id": 1 00:18:01.942 } 00:18:01.942 Got JSON-RPC error response 00:18:01.942 response: 00:18:01.942 { 00:18:01.942 "code": -32602, 00:18:01.942 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:01.942 }' 00:18:01.942 15:05:17 -- target/invalid.sh@88 -- # [[ request: 00:18:01.942 { 00:18:01.942 "name": "foobar", 00:18:01.942 "method": "nvmf_delete_target", 00:18:01.942 "req_id": 1 00:18:01.942 } 00:18:01.942 Got JSON-RPC error response 00:18:01.942 response: 00:18:01.942 { 00:18:01.942 "code": -32602, 00:18:01.942 "message": "The specified target doesn't exist, cannot delete it." 00:18:01.942 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:01.942 15:05:17 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:01.942 15:05:17 -- target/invalid.sh@91 -- # nvmftestfini 00:18:01.942 15:05:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:01.942 15:05:17 -- nvmf/common.sh@117 -- # sync 00:18:01.942 15:05:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.942 15:05:17 -- nvmf/common.sh@120 -- # set +e 00:18:01.942 15:05:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.942 15:05:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.942 rmmod nvme_tcp 00:18:01.942 rmmod nvme_fabrics 00:18:01.942 rmmod nvme_keyring 00:18:01.942 15:05:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.942 15:05:17 -- nvmf/common.sh@124 -- # set -e 00:18:01.942 15:05:17 -- nvmf/common.sh@125 -- # return 0 00:18:01.942 15:05:17 -- nvmf/common.sh@478 -- # '[' -n 67730 ']' 00:18:01.942 15:05:17 -- nvmf/common.sh@479 -- # killprocess 67730 00:18:01.942 15:05:17 -- common/autotest_common.sh@936 -- # '[' -z 67730 ']' 00:18:01.942 15:05:17 -- common/autotest_common.sh@940 -- # kill -0 67730 00:18:01.942 15:05:17 -- common/autotest_common.sh@941 -- # uname 00:18:01.942 15:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.942 15:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67730 00:18:01.942 15:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:01.942 15:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:01.942 15:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67730' 00:18:01.942 killing process with pid 67730 00:18:01.942 15:05:17 -- common/autotest_common.sh@955 -- # kill 67730 00:18:01.942 15:05:17 -- common/autotest_common.sh@960 -- # wait 67730 00:18:02.201 15:05:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:02.201 15:05:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:02.201 15:05:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:02.201 15:05:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.201 15:05:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.201 15:05:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.201 15:05:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.201 15:05:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.201 15:05:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:02.460 00:18:02.460 real 
0m5.235s 00:18:02.460 user 0m19.385s 00:18:02.460 sys 0m1.642s 00:18:02.460 15:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:02.460 15:05:17 -- common/autotest_common.sh@10 -- # set +x 00:18:02.460 ************************************ 00:18:02.460 END TEST nvmf_invalid 00:18:02.460 ************************************ 00:18:02.460 15:05:17 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:02.460 15:05:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:02.460 15:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.460 15:05:17 -- common/autotest_common.sh@10 -- # set +x 00:18:02.460 ************************************ 00:18:02.460 START TEST nvmf_abort 00:18:02.460 ************************************ 00:18:02.460 15:05:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:02.719 * Looking for test storage... 00:18:02.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:02.719 15:05:18 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.719 15:05:18 -- nvmf/common.sh@7 -- # uname -s 00:18:02.719 15:05:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.719 15:05:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.719 15:05:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.719 15:05:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.719 15:05:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.719 15:05:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.719 15:05:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.719 15:05:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.719 15:05:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.719 15:05:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.720 15:05:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:02.720 15:05:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:02.720 15:05:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.720 15:05:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.720 15:05:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.720 15:05:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.720 15:05:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.720 15:05:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.720 15:05:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.720 15:05:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.720 15:05:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.720 15:05:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.720 15:05:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.720 15:05:18 -- paths/export.sh@5 -- # export PATH 00:18:02.720 15:05:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.720 15:05:18 -- nvmf/common.sh@47 -- # : 0 00:18:02.720 15:05:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.720 15:05:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.720 15:05:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.720 15:05:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.720 15:05:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.720 15:05:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.720 15:05:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.720 15:05:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.720 15:05:18 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.720 15:05:18 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:02.720 15:05:18 -- target/abort.sh@14 -- # nvmftestinit 00:18:02.720 15:05:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:02.720 15:05:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.720 15:05:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:02.720 15:05:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:02.720 15:05:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:02.720 15:05:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.720 15:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.720 15:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.720 15:05:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:02.720 15:05:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:02.720 15:05:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:02.720 15:05:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:02.720 15:05:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:02.720 15:05:18 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:18:02.720 15:05:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.720 15:05:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.720 15:05:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.720 15:05:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:02.720 15:05:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.720 15:05:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.720 15:05:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.720 15:05:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.720 15:05:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.720 15:05:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.720 15:05:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.720 15:05:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.720 15:05:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:02.720 15:05:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:02.720 Cannot find device "nvmf_tgt_br" 00:18:02.720 15:05:18 -- nvmf/common.sh@155 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.720 Cannot find device "nvmf_tgt_br2" 00:18:02.720 15:05:18 -- nvmf/common.sh@156 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:02.720 15:05:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:02.720 Cannot find device "nvmf_tgt_br" 00:18:02.720 15:05:18 -- nvmf/common.sh@158 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:02.720 Cannot find device "nvmf_tgt_br2" 00:18:02.720 15:05:18 -- nvmf/common.sh@159 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:02.720 15:05:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:02.720 15:05:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.720 15:05:18 -- nvmf/common.sh@162 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.720 15:05:18 -- nvmf/common.sh@163 -- # true 00:18:02.720 15:05:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.720 15:05:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.980 15:05:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.980 15:05:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.980 15:05:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.980 15:05:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.980 15:05:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.980 15:05:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:02.980 15:05:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:02.980 15:05:18 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:18:02.980 15:05:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:02.980 15:05:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:02.980 15:05:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:02.980 15:05:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.980 15:05:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.980 15:05:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.980 15:05:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:02.980 15:05:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:02.980 15:05:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.980 15:05:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.980 15:05:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.980 15:05:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.980 15:05:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.980 15:05:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:02.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:02.980 00:18:02.980 --- 10.0.0.2 ping statistics --- 00:18:02.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.980 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:02.980 15:05:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:02.980 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.980 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:02.980 00:18:02.980 --- 10.0.0.3 ping statistics --- 00:18:02.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.980 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:02.980 15:05:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:02.980 00:18:02.980 --- 10.0.0.1 ping statistics --- 00:18:02.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.980 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:02.980 15:05:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.980 15:05:18 -- nvmf/common.sh@422 -- # return 0 00:18:02.980 15:05:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:02.980 15:05:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.980 15:05:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:02.980 15:05:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:02.980 15:05:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.980 15:05:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:02.980 15:05:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:02.980 15:05:18 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:18:02.980 15:05:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:02.980 15:05:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:02.980 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:18:02.980 15:05:18 -- nvmf/common.sh@470 -- # nvmfpid=68234 00:18:02.980 15:05:18 -- nvmf/common.sh@471 -- # waitforlisten 68234 00:18:02.980 15:05:18 -- common/autotest_common.sh@817 -- # '[' -z 68234 ']' 00:18:02.980 15:05:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.980 15:05:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.980 15:05:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.980 15:05:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.980 15:05:18 -- common/autotest_common.sh@10 -- # set +x 00:18:02.980 15:05:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:03.240 [2024-04-18 15:05:18.691997] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:03.240 [2024-04-18 15:05:18.692063] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.240 [2024-04-18 15:05:18.833978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.240 [2024-04-18 15:05:18.911407] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.240 [2024-04-18 15:05:18.911487] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.240 [2024-04-18 15:05:18.911498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.240 [2024-04-18 15:05:18.911507] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.240 [2024-04-18 15:05:18.911514] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
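The target starting here listens inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init assembled a few entries earlier. Condensed into a standalone sketch — interface names, addresses and rules are the ones from the trace; it assumes root privileges, and the teardown half is omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target data interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> namespace, the same reachability check as the pings above

On top of that topology, the abort test below creates the TCP transport, a Malloc0/Delay0 bdev pair, subsystem nqn.2016-06.io.spdk:cnode0 with a listener on 10.0.0.2:4420, and drives it with build/examples/abort.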
00:18:03.240 [2024-04-18 15:05:18.911734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.240 [2024-04-18 15:05:18.912646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.240 [2024-04-18 15:05:18.912646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.177 15:05:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.177 15:05:19 -- common/autotest_common.sh@850 -- # return 0 00:18:04.177 15:05:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:04.177 15:05:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 15:05:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.177 15:05:19 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 [2024-04-18 15:05:19.616679] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 Malloc0 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 Delay0 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 [2024-04-18 15:05:19.697420] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:04.177 15:05:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.177 15:05:19 -- common/autotest_common.sh@10 -- # set +x 00:18:04.177 15:05:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.177 15:05:19 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:18:04.436 [2024-04-18 15:05:19.894819] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:06.341 Initializing NVMe Controllers 00:18:06.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:06.341 controller IO queue size 128 less than required 00:18:06.341 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:18:06.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:06.341 Initialization complete. Launching workers. 00:18:06.341 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 43053 00:18:06.341 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43114, failed to submit 62 00:18:06.341 success 43057, unsuccess 57, failed 0 00:18:06.341 15:05:21 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:06.341 15:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.341 15:05:21 -- common/autotest_common.sh@10 -- # set +x 00:18:06.341 15:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.341 15:05:21 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:06.341 15:05:21 -- target/abort.sh@38 -- # nvmftestfini 00:18:06.341 15:05:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:06.341 15:05:21 -- nvmf/common.sh@117 -- # sync 00:18:06.341 15:05:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.341 15:05:21 -- nvmf/common.sh@120 -- # set +e 00:18:06.341 15:05:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.341 15:05:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.341 rmmod nvme_tcp 00:18:06.341 rmmod nvme_fabrics 00:18:06.341 rmmod nvme_keyring 00:18:06.341 15:05:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.341 15:05:22 -- nvmf/common.sh@124 -- # set -e 00:18:06.341 15:05:22 -- nvmf/common.sh@125 -- # return 0 00:18:06.341 15:05:22 -- nvmf/common.sh@478 -- # '[' -n 68234 ']' 00:18:06.341 15:05:22 -- nvmf/common.sh@479 -- # killprocess 68234 00:18:06.341 15:05:22 -- common/autotest_common.sh@936 -- # '[' -z 68234 ']' 00:18:06.341 15:05:22 -- common/autotest_common.sh@940 -- # kill -0 68234 00:18:06.341 15:05:22 -- common/autotest_common.sh@941 -- # uname 00:18:06.341 15:05:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.600 15:05:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68234 00:18:06.600 15:05:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:06.600 15:05:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:06.600 15:05:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68234' 00:18:06.600 killing process with pid 68234 00:18:06.600 15:05:22 -- common/autotest_common.sh@955 -- # kill 68234 00:18:06.600 15:05:22 -- common/autotest_common.sh@960 -- # wait 68234 00:18:06.886 15:05:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:06.886 15:05:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:06.886 15:05:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:06.886 15:05:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.886 15:05:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.886 15:05:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.886 
15:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.886 15:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.886 15:05:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:06.886 00:18:06.886 real 0m4.309s 00:18:06.886 user 0m11.947s 00:18:06.886 sys 0m1.201s 00:18:06.886 15:05:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:06.886 15:05:22 -- common/autotest_common.sh@10 -- # set +x 00:18:06.886 ************************************ 00:18:06.886 END TEST nvmf_abort 00:18:06.886 ************************************ 00:18:06.886 15:05:22 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:06.886 15:05:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:06.886 15:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:06.886 15:05:22 -- common/autotest_common.sh@10 -- # set +x 00:18:06.886 ************************************ 00:18:06.886 START TEST nvmf_ns_hotplug_stress 00:18:06.886 ************************************ 00:18:06.886 15:05:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:07.146 * Looking for test storage... 00:18:07.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.146 15:05:22 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.146 15:05:22 -- nvmf/common.sh@7 -- # uname -s 00:18:07.146 15:05:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.146 15:05:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.146 15:05:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.146 15:05:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.146 15:05:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.146 15:05:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.146 15:05:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.146 15:05:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.146 15:05:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.146 15:05:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.146 15:05:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:07.146 15:05:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:07.146 15:05:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.146 15:05:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.146 15:05:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.146 15:05:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.146 15:05:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.146 15:05:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.146 15:05:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.146 15:05:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.146 15:05:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.146 15:05:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.147 15:05:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.147 15:05:22 -- paths/export.sh@5 -- # export PATH 00:18:07.147 15:05:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.147 15:05:22 -- nvmf/common.sh@47 -- # : 0 00:18:07.147 15:05:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.147 15:05:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.147 15:05:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.147 15:05:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.147 15:05:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.147 15:05:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.147 15:05:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.147 15:05:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.147 15:05:22 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.147 15:05:22 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:18:07.147 15:05:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:07.147 15:05:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.147 15:05:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:07.147 15:05:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:07.147 15:05:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:07.147 15:05:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:07.147 15:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.147 15:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.147 15:05:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:07.147 15:05:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:07.147 15:05:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:07.147 15:05:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:07.147 15:05:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:07.147 15:05:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:07.147 15:05:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.147 15:05:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.147 15:05:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.147 15:05:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:07.147 15:05:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.147 15:05:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.147 15:05:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.147 15:05:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.147 15:05:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.147 15:05:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.147 15:05:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.147 15:05:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.147 15:05:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:07.147 15:05:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:07.147 Cannot find device "nvmf_tgt_br" 00:18:07.147 15:05:22 -- nvmf/common.sh@155 -- # true 00:18:07.147 15:05:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.147 Cannot find device "nvmf_tgt_br2" 00:18:07.147 15:05:22 -- nvmf/common.sh@156 -- # true 00:18:07.147 15:05:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:07.147 15:05:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:07.147 Cannot find device "nvmf_tgt_br" 00:18:07.147 15:05:22 -- nvmf/common.sh@158 -- # true 00:18:07.147 15:05:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:07.147 Cannot find device "nvmf_tgt_br2" 00:18:07.147 15:05:22 -- nvmf/common.sh@159 -- # true 00:18:07.147 15:05:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:07.406 15:05:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:07.406 15:05:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.406 15:05:22 -- nvmf/common.sh@162 -- # true 00:18:07.406 15:05:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.406 15:05:22 -- nvmf/common.sh@163 -- # true 00:18:07.406 15:05:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.406 15:05:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.406 15:05:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.406 15:05:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.406 15:05:22 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.406 15:05:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.406 15:05:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.406 15:05:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.406 15:05:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.406 15:05:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:07.406 15:05:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:07.406 15:05:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:07.406 15:05:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:07.406 15:05:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.406 15:05:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.406 15:05:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.406 15:05:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:07.406 15:05:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:07.406 15:05:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.406 15:05:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.406 15:05:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.406 15:05:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.406 15:05:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.406 15:05:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:07.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:18:07.406 00:18:07.406 --- 10.0.0.2 ping statistics --- 00:18:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.406 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:07.406 15:05:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:07.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:18:07.406 00:18:07.406 --- 10.0.0.3 ping statistics --- 00:18:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.406 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:07.406 15:05:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:18:07.406 00:18:07.406 --- 10.0.0.1 ping statistics --- 00:18:07.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.406 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:18:07.406 15:05:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.406 15:05:23 -- nvmf/common.sh@422 -- # return 0 00:18:07.406 15:05:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:07.406 15:05:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.406 15:05:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:07.406 15:05:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:07.406 15:05:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.406 15:05:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:07.406 15:05:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:07.406 15:05:23 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:18:07.406 15:05:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:07.406 15:05:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:07.406 15:05:23 -- common/autotest_common.sh@10 -- # set +x 00:18:07.406 15:05:23 -- nvmf/common.sh@470 -- # nvmfpid=68498 00:18:07.406 15:05:23 -- nvmf/common.sh@471 -- # waitforlisten 68498 00:18:07.406 15:05:23 -- common/autotest_common.sh@817 -- # '[' -z 68498 ']' 00:18:07.406 15:05:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.406 15:05:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.406 15:05:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.406 15:05:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.406 15:05:23 -- common/autotest_common.sh@10 -- # set +x 00:18:07.406 15:05:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:07.676 [2024-04-18 15:05:23.158057] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:07.676 [2024-04-18 15:05:23.158136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.676 [2024-04-18 15:05:23.300775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.676 [2024-04-18 15:05:23.377598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.676 [2024-04-18 15:05:23.377648] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.676 [2024-04-18 15:05:23.377658] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.676 [2024-04-18 15:05:23.377667] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.676 [2024-04-18 15:05:23.377675] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
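At this point nvmf_tgt has been launched inside the namespace with core mask 0xE (the three reactors on cores 1-3 start just below), and the rest of the target setup happens over rpc.py. A sketch stitching together the launch and the RPC calls that appear around this point, not the literal script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport (opts come from NVMF_TRANSPORT_OPTS)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    $rpc bdev_malloc_create 32 512 -b Malloc0                       # 32 MB malloc bdev, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # Delay0 injects artificial latency over Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512                            # NULL1 is the bdev that gets resized under load
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &                   # 30 s randread load that keeps running through read errors
    PERF_PID=$!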
00:18:07.937 [2024-04-18 15:05:23.377895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.937 [2024-04-18 15:05:23.378718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.937 [2024-04-18 15:05:23.378717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.505 15:05:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:08.505 15:05:24 -- common/autotest_common.sh@850 -- # return 0 00:18:08.505 15:05:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:08.505 15:05:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:08.505 15:05:24 -- common/autotest_common.sh@10 -- # set +x 00:18:08.505 15:05:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.505 15:05:24 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:18:08.505 15:05:24 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:08.765 [2024-04-18 15:05:24.327689] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.765 15:05:24 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:09.023 15:05:24 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.282 [2024-04-18 15:05:24.729089] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.282 15:05:24 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:09.282 15:05:24 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:18:09.541 Malloc0 00:18:09.541 15:05:25 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:09.800 Delay0 00:18:09.800 15:05:25 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:10.058 15:05:25 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:18:10.058 NULL1 00:18:10.058 15:05:25 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:10.317 15:05:25 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=68635 00:18:10.317 15:05:25 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:18:10.317 15:05:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:10.317 15:05:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.576 15:05:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:10.835 15:05:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:18:10.835 15:05:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 
1001 00:18:10.835 true 00:18:11.094 15:05:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:11.094 15:05:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:11.094 15:05:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:11.353 15:05:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:18:11.353 15:05:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:18:11.613 true 00:18:11.613 15:05:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:11.613 15:05:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.548 Read completed with error (sct=0, sc=11) 00:18:12.549 15:05:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:12.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:12.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:12.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:12.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:12.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:12.808 15:05:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:18:12.808 15:05:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:18:13.066 true 00:18:13.066 15:05:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:13.066 15:05:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.000 15:05:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:14.000 15:05:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:18:14.000 15:05:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:18:14.259 true 00:18:14.259 15:05:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:14.259 15:05:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:14.518 15:05:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:14.518 15:05:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:18:14.518 15:05:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:18:14.777 true 00:18:14.777 15:05:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:14.777 15:05:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:15.773 15:05:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:16.031 15:05:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:18:16.031 15:05:31 -- target/ns_hotplug_stress.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:18:16.288 true 00:18:16.288 15:05:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:16.288 15:05:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.288 15:05:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:16.546 15:05:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:18:16.546 15:05:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:18:16.805 true 00:18:16.805 15:05:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:16.805 15:05:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:17.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.741 15:05:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:17.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:17.999 15:05:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:18:17.999 15:05:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:18:18.256 true 00:18:18.256 15:05:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:18.256 15:05:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.192 15:05:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:19.192 15:05:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:18:19.192 15:05:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:18:19.450 true 00:18:19.450 15:05:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:19.450 15:05:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.708 15:05:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:19.968 15:05:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:18:19.968 15:05:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:18:19.968 true 00:18:19.968 15:05:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:19.968 15:05:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.345 15:05:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:18:21.345 15:05:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:18:21.345 15:05:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:18:21.604 true 00:18:21.604 15:05:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:21.604 15:05:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.604 15:05:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:21.862 15:05:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:18:21.862 15:05:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:18:22.120 true 00:18:22.120 15:05:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:22.120 15:05:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:23.056 15:05:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:23.315 15:05:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:18:23.315 15:05:38 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:18:23.573 true 00:18:23.573 15:05:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:23.573 15:05:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:23.573 15:05:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:23.831 15:05:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:18:23.831 15:05:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:18:24.106 true 00:18:24.106 15:05:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:24.106 15:05:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:25.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.041 15:05:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:25.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.300 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:25.300 15:05:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:18:25.300 15:05:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:18:25.559 true 00:18:25.559 15:05:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:25.559 15:05:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:26.494 15:05:41 -- target/ns_hotplug_stress.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:26.494 15:05:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:18:26.495 15:05:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:18:26.754 true 00:18:26.754 15:05:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:26.754 15:05:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:27.012 15:05:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:27.270 15:05:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:18:27.270 15:05:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:18:27.270 true 00:18:27.270 15:05:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:27.270 15:05:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:28.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.320 15:05:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:28.320 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:28.579 15:05:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:18:28.579 15:05:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:18:28.837 true 00:18:28.837 15:05:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:28.837 15:05:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:29.773 15:05:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:29.773 15:05:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:18:29.773 15:05:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:18:30.032 true 00:18:30.032 15:05:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:30.032 15:05:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:30.291 15:05:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:30.549 15:05:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:18:30.550 15:05:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:18:30.550 true 00:18:30.550 15:05:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:30.550 15:05:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
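The pattern repeating above and below is the actual hotplug stress loop: while the 30-second spdk_nvme_perf job (PID 68635 in this run) keeps issuing reads, the script hot-removes namespace 1, re-attaches Delay0, and grows NULL1 by one step per pass. The suppressed "Read completed with error (sct=0, sc=11)" messages are I/O landing on the namespace while it is detached, which is exactly the condition the test exercises. A condensed sketch of the loop, assuming the surrounding script's rpc path and PERF_PID variables:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                          # run until perf (-t 30) exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach Delay0 as a namespace
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                       # resize NULL1 while I/O is in flight
    done
    wait "$PERF_PID"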
00:18:31.926 15:05:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:31.926 15:05:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:18:31.926 15:05:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:18:31.926 true 00:18:32.185 15:05:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:32.185 15:05:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:32.185 15:05:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:32.442 15:05:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:18:32.442 15:05:48 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:18:32.700 true 00:18:32.700 15:05:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:32.700 15:05:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.638 15:05:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:33.898 15:05:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:18:33.898 15:05:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:18:34.157 true 00:18:34.157 15:05:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:34.157 15:05:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:34.157 15:05:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:34.501 15:05:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:18:34.501 15:05:50 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:18:34.763 true 00:18:34.763 15:05:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:34.764 15:05:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:35.700 15:05:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:35.958 15:05:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:18:35.958 15:05:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:18:35.958 true 00:18:35.958 15:05:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:35.958 15:05:51 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:36.216 15:05:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:36.474 15:05:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:18:36.474 15:05:52 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:18:36.732 true 00:18:36.732 15:05:52 -- target/ns_hotplug_stress.sh@35 
-- # kill -0 68635 00:18:36.732 15:05:52 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:37.667 15:05:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:37.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:37.925 15:05:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:18:37.925 15:05:53 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:18:38.184 true 00:18:38.184 15:05:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:38.184 15:05:53 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:38.442 15:05:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:38.442 15:05:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:18:38.442 15:05:54 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:18:38.701 true 00:18:38.701 15:05:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:38.701 15:05:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:39.636 15:05:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:39.895 15:05:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:18:39.895 15:05:55 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:18:40.153 true 00:18:40.153 15:05:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:40.153 15:05:55 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:40.412 15:05:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:40.412 15:05:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:18:40.412 15:05:56 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:18:40.671 Initializing NVMe Controllers 00:18:40.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:40.671 Controller IO queue size 128, less than required. 00:18:40.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.671 Controller IO queue size 128, less than required. 00:18:40.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:40.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:40.671 Initialization complete. Launching workers. 
00:18:40.671 ======================================================== 00:18:40.671 Latency(us) 00:18:40.671 Device Information : IOPS MiB/s Average min max 00:18:40.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 909.44 0.44 82778.46 2646.87 1025058.31 00:18:40.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14723.34 7.19 8672.92 2279.77 514320.36 00:18:40.671 ======================================================== 00:18:40.671 Total : 15632.78 7.63 12984.03 2279.77 1025058.31 00:18:40.671 00:18:40.671 true 00:18:40.671 15:05:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68635 00:18:40.671 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (68635) - No such process 00:18:40.671 15:05:56 -- target/ns_hotplug_stress.sh@44 -- # wait 68635 00:18:40.671 15:05:56 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:40.671 15:05:56 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:18:40.671 15:05:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:40.671 15:05:56 -- nvmf/common.sh@117 -- # sync 00:18:40.671 15:05:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.671 15:05:56 -- nvmf/common.sh@120 -- # set +e 00:18:40.671 15:05:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.671 15:05:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.671 rmmod nvme_tcp 00:18:40.671 rmmod nvme_fabrics 00:18:40.671 rmmod nvme_keyring 00:18:40.930 15:05:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.930 15:05:56 -- nvmf/common.sh@124 -- # set -e 00:18:40.930 15:05:56 -- nvmf/common.sh@125 -- # return 0 00:18:40.930 15:05:56 -- nvmf/common.sh@478 -- # '[' -n 68498 ']' 00:18:40.930 15:05:56 -- nvmf/common.sh@479 -- # killprocess 68498 00:18:40.930 15:05:56 -- common/autotest_common.sh@936 -- # '[' -z 68498 ']' 00:18:40.930 15:05:56 -- common/autotest_common.sh@940 -- # kill -0 68498 00:18:40.930 15:05:56 -- common/autotest_common.sh@941 -- # uname 00:18:40.930 15:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:40.930 15:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68498 00:18:40.930 killing process with pid 68498 00:18:40.930 15:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:40.930 15:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:40.930 15:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68498' 00:18:40.930 15:05:56 -- common/autotest_common.sh@955 -- # kill 68498 00:18:40.930 15:05:56 -- common/autotest_common.sh@960 -- # wait 68498 00:18:41.189 15:05:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:41.189 15:05:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:41.189 15:05:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:41.189 15:05:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.189 15:05:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.189 15:05:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.189 15:05:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.189 15:05:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.189 15:05:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:41.189 00:18:41.189 real 0m34.194s 00:18:41.189 user 2m19.853s 00:18:41.189 sys 0m10.608s 00:18:41.189 15:05:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:41.189 15:05:56 -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.189 ************************************ 00:18:41.189 END TEST nvmf_ns_hotplug_stress 00:18:41.189 ************************************ 00:18:41.189 15:05:56 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:41.189 15:05:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:41.189 15:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:41.189 15:05:56 -- common/autotest_common.sh@10 -- # set +x 00:18:41.189 ************************************ 00:18:41.189 START TEST nvmf_connect_stress 00:18:41.189 ************************************ 00:18:41.189 15:05:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:41.448 * Looking for test storage... 00:18:41.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.448 15:05:57 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.448 15:05:57 -- nvmf/common.sh@7 -- # uname -s 00:18:41.448 15:05:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.448 15:05:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.448 15:05:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.448 15:05:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.448 15:05:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.448 15:05:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.448 15:05:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.448 15:05:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.448 15:05:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.448 15:05:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:41.448 15:05:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:41.448 15:05:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.448 15:05:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.448 15:05:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.448 15:05:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.448 15:05:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.448 15:05:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.448 15:05:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.448 15:05:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.448 15:05:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.448 15:05:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.448 15:05:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.448 15:05:57 -- paths/export.sh@5 -- # export PATH 00:18:41.448 15:05:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.448 15:05:57 -- nvmf/common.sh@47 -- # : 0 00:18:41.448 15:05:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.448 15:05:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.448 15:05:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.448 15:05:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.448 15:05:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.448 15:05:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.448 15:05:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.448 15:05:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.448 15:05:57 -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:41.448 15:05:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:41.448 15:05:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.448 15:05:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:41.448 15:05:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:41.448 15:05:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:41.448 15:05:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.448 15:05:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.448 15:05:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.448 15:05:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:41.448 15:05:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:41.448 15:05:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.448 
15:05:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.448 15:05:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.448 15:05:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.448 15:05:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.448 15:05:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.448 15:05:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.448 15:05:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.448 15:05:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.448 15:05:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.448 15:05:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.448 15:05:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.448 15:05:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.448 15:05:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.448 Cannot find device "nvmf_tgt_br" 00:18:41.448 15:05:57 -- nvmf/common.sh@155 -- # true 00:18:41.448 15:05:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.448 Cannot find device "nvmf_tgt_br2" 00:18:41.448 15:05:57 -- nvmf/common.sh@156 -- # true 00:18:41.448 15:05:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.448 15:05:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.448 Cannot find device "nvmf_tgt_br" 00:18:41.448 15:05:57 -- nvmf/common.sh@158 -- # true 00:18:41.448 15:05:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.448 Cannot find device "nvmf_tgt_br2" 00:18:41.448 15:05:57 -- nvmf/common.sh@159 -- # true 00:18:41.707 15:05:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.707 15:05:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:41.707 15:05:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.707 15:05:57 -- nvmf/common.sh@162 -- # true 00:18:41.707 15:05:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.707 15:05:57 -- nvmf/common.sh@163 -- # true 00:18:41.707 15:05:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.707 15:05:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.707 15:05:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.707 15:05:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.707 15:05:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.707 15:05:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.707 15:05:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.707 15:05:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.707 15:05:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.707 15:05:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.707 15:05:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.707 
15:05:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.707 15:05:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:41.707 15:05:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.707 15:05:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.707 15:05:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.707 15:05:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.707 15:05:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.707 15:05:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.707 15:05:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.707 15:05:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.965 15:05:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.965 15:05:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.965 15:05:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:41.965 00:18:41.965 --- 10.0.0.2 ping statistics --- 00:18:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.965 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:41.965 15:05:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:18:41.965 00:18:41.965 --- 10.0.0.3 ping statistics --- 00:18:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.965 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:41.965 15:05:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:41.965 00:18:41.965 --- 10.0.0.1 ping statistics --- 00:18:41.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.965 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:41.965 15:05:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.965 15:05:57 -- nvmf/common.sh@422 -- # return 0 00:18:41.965 15:05:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:41.965 15:05:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.965 15:05:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:41.965 15:05:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:41.965 15:05:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.965 15:05:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:41.965 15:05:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:41.965 15:05:57 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:41.965 15:05:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:41.965 15:05:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:41.965 15:05:57 -- common/autotest_common.sh@10 -- # set +x 00:18:41.965 15:05:57 -- nvmf/common.sh@470 -- # nvmfpid=69782 00:18:41.965 15:05:57 -- nvmf/common.sh@471 -- # waitforlisten 69782 00:18:41.965 15:05:57 -- common/autotest_common.sh@817 -- # '[' -z 69782 ']' 00:18:41.965 15:05:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.965 15:05:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.965 15:05:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.965 15:05:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.965 15:05:57 -- common/autotest_common.sh@10 -- # set +x 00:18:41.965 15:05:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:41.965 [2024-04-18 15:05:57.541240] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:41.965 [2024-04-18 15:05:57.541340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.225 [2024-04-18 15:05:57.684804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.225 [2024-04-18 15:05:57.782742] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.225 [2024-04-18 15:05:57.782826] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.225 [2024-04-18 15:05:57.782838] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.225 [2024-04-18 15:05:57.782847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.225 [2024-04-18 15:05:57.782856] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
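This is the second test in the run, connect_stress: the veth topology was rebuilt just above and a fresh nvmf_tgt (pid 69782) is being brought up, but its configuration now goes through the rpc_cmd helper against the /var/tmp/spdk.sock the target listens on, and a long-running connect_stress client is pointed at cnode1 while the script keeps driving RPCs at the target. The repeated "kill -0 69834" lines that follow are liveness checks on that client. A hypothetical condensation of the client launch and check (arguments copied from the log; the rpc.txt assembly at connect_stress.sh lines 27-28 is not expanded here because its contents are not shown in the output):

    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!                       # 69834 in this run

    # Repeated between RPC calls below: the test fails if the client ever exits early.
    kill -0 "$PERF_PID"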
00:18:42.225 [2024-04-18 15:05:57.783103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.225 [2024-04-18 15:05:57.783254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.225 [2024-04-18 15:05:57.783255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.792 15:05:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.792 15:05:58 -- common/autotest_common.sh@850 -- # return 0 00:18:42.792 15:05:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:42.792 15:05:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:42.792 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.792 15:05:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.792 15:05:58 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.792 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.792 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.792 [2024-04-18 15:05:58.464957] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.792 15:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.792 15:05:58 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:42.792 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.792 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.792 15:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.792 15:05:58 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.792 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.792 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:42.792 [2024-04-18 15:05:58.491063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.792 15:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.792 15:05:58 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:42.792 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.051 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.051 NULL1 00:18:43.051 15:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.051 15:05:58 -- target/connect_stress.sh@21 -- # PERF_PID=69834 00:18:43.051 15:05:58 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:43.051 15:05:58 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:43.051 15:05:58 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # seq 1 20 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- 
target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.051 15:05:58 -- target/connect_stress.sh@28 -- # cat 00:18:43.051 15:05:58 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:43.051 15:05:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.051 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.051 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.310 15:05:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.310 15:05:58 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:43.310 15:05:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.310 15:05:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.310 15:05:58 -- common/autotest_common.sh@10 -- # set +x 00:18:43.878 15:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.878 15:05:59 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:43.878 15:05:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.878 15:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.878 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:18:44.137 15:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.137 15:05:59 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:44.137 15:05:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.137 15:05:59 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:44.137 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:18:44.396 15:05:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.396 15:05:59 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:44.396 15:05:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.396 15:05:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.396 15:05:59 -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 15:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.653 15:06:00 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:44.653 15:06:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.653 15:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.653 15:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:44.911 15:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:44.911 15:06:00 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:44.911 15:06:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:44.911 15:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:44.911 15:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.478 15:06:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.478 15:06:00 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:45.478 15:06:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.478 15:06:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.478 15:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:45.736 15:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.736 15:06:01 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:45.736 15:06:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.736 15:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.736 15:06:01 -- common/autotest_common.sh@10 -- # set +x 00:18:45.994 15:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.994 15:06:01 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:45.994 15:06:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:45.994 15:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.994 15:06:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.253 15:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.253 15:06:01 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:46.253 15:06:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.253 15:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.253 15:06:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.819 15:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.819 15:06:02 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:46.819 15:06:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.819 15:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.819 15:06:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.079 15:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.079 15:06:02 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:47.079 15:06:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.079 15:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.079 15:06:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.337 15:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.337 15:06:02 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:47.337 15:06:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.337 15:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.337 
15:06:02 -- common/autotest_common.sh@10 -- # set +x 00:18:47.594 15:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.594 15:06:03 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:47.594 15:06:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.594 15:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.594 15:06:03 -- common/autotest_common.sh@10 -- # set +x 00:18:47.852 15:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.852 15:06:03 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:47.852 15:06:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.852 15:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.852 15:06:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.473 15:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.473 15:06:03 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:48.473 15:06:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.473 15:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.473 15:06:03 -- common/autotest_common.sh@10 -- # set +x 00:18:48.731 15:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.732 15:06:04 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:48.732 15:06:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.732 15:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.732 15:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:48.991 15:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.991 15:06:04 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:48.991 15:06:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.991 15:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.991 15:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:49.249 15:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.249 15:06:04 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:49.249 15:06:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.249 15:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.249 15:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:49.507 15:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.507 15:06:05 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:49.507 15:06:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.507 15:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.507 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.075 15:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.075 15:06:05 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:50.075 15:06:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.075 15:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.075 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.334 15:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.334 15:06:05 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:50.334 15:06:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.334 15:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.334 15:06:05 -- common/autotest_common.sh@10 -- # set +x 00:18:50.593 15:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.593 15:06:06 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:50.593 15:06:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.593 15:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.593 15:06:06 -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.852 15:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.852 15:06:06 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:50.852 15:06:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.852 15:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.852 15:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.111 15:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.111 15:06:06 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:51.111 15:06:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.111 15:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.111 15:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:51.723 15:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.723 15:06:07 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:51.723 15:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.723 15:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.723 15:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 15:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.995 15:06:07 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:51.995 15:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.995 15:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.995 15:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.253 15:06:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.253 15:06:07 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:52.253 15:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.253 15:06:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.253 15:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:52.512 15:06:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.512 15:06:08 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:52.512 15:06:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.512 15:06:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.512 15:06:08 -- common/autotest_common.sh@10 -- # set +x 00:18:52.770 15:06:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.771 15:06:08 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:52.771 15:06:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.771 15:06:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.771 15:06:08 -- common/autotest_common.sh@10 -- # set +x 00:18:53.029 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:53.287 15:06:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.287 15:06:08 -- target/connect_stress.sh@34 -- # kill -0 69834 00:18:53.287 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69834) - No such process 00:18:53.287 15:06:08 -- target/connect_stress.sh@38 -- # wait 69834 00:18:53.287 15:06:08 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:18:53.287 15:06:08 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:53.287 15:06:08 -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:53.287 15:06:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:53.287 15:06:08 -- nvmf/common.sh@117 -- # sync 00:18:53.287 15:06:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.287 15:06:08 -- nvmf/common.sh@120 -- # set +e 00:18:53.287 15:06:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.287 15:06:08 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.287 rmmod nvme_tcp 00:18:53.287 rmmod nvme_fabrics 00:18:53.287 rmmod nvme_keyring 00:18:53.287 15:06:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.287 15:06:08 -- nvmf/common.sh@124 -- # set -e 00:18:53.287 15:06:08 -- nvmf/common.sh@125 -- # return 0 00:18:53.287 15:06:08 -- nvmf/common.sh@478 -- # '[' -n 69782 ']' 00:18:53.287 15:06:08 -- nvmf/common.sh@479 -- # killprocess 69782 00:18:53.287 15:06:08 -- common/autotest_common.sh@936 -- # '[' -z 69782 ']' 00:18:53.287 15:06:08 -- common/autotest_common.sh@940 -- # kill -0 69782 00:18:53.287 15:06:08 -- common/autotest_common.sh@941 -- # uname 00:18:53.287 15:06:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.287 15:06:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69782 00:18:53.287 killing process with pid 69782 00:18:53.287 15:06:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:53.287 15:06:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:53.287 15:06:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69782' 00:18:53.287 15:06:08 -- common/autotest_common.sh@955 -- # kill 69782 00:18:53.287 15:06:08 -- common/autotest_common.sh@960 -- # wait 69782 00:18:53.546 15:06:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:53.546 15:06:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:53.546 15:06:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:53.546 15:06:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:53.546 15:06:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:53.546 15:06:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.546 15:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.546 15:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.546 15:06:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:53.546 00:18:53.546 real 0m12.293s 00:18:53.546 user 0m39.572s 00:18:53.546 sys 0m4.543s 00:18:53.546 15:06:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:53.546 ************************************ 00:18:53.546 END TEST nvmf_connect_stress 00:18:53.546 ************************************ 00:18:53.546 15:06:09 -- common/autotest_common.sh@10 -- # set +x 00:18:53.546 15:06:09 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:53.546 15:06:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:53.546 15:06:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:53.546 15:06:09 -- common/autotest_common.sh@10 -- # set +x 00:18:53.805 ************************************ 00:18:53.805 START TEST nvmf_fused_ordering 00:18:53.805 ************************************ 00:18:53.805 15:06:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:53.805 * Looking for test storage... 
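The connect_stress run that just finished follows the bring-up these nvmf target tests share: create the TCP transport, create a subsystem with a TCP listener on 10.0.0.2:4420 and a 1000 MB null bdev, then point the stress tool at the resulting NQN for ten seconds before tearing everything back down. A minimal stand-alone sketch of that sequence, assuming an SPDK checkout as the working directory, an already running nvmf_tgt reachable over the default RPC socket, and scripts/rpc.py called directly instead of the test suite's rpc_cmd wrapper:

  #!/usr/bin/env bash
  # Hedged re-creation of the setup traced above; paths and socket are assumptions.
  set -e
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192            # *** TCP Transport Init ***
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                    # 1000 MB null bdev, 512-byte blocks
  # Run the stress tool against the listener for 10 seconds, as the trace does:
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10

The fused_ordering test starting below builds the same target state (its trace additionally shows nvmf_subsystem_add_ns attaching NULL1 to the subsystem) before running its own workload.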
00:18:53.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:53.805 15:06:09 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:53.805 15:06:09 -- nvmf/common.sh@7 -- # uname -s 00:18:53.805 15:06:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.805 15:06:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.805 15:06:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.805 15:06:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.805 15:06:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.805 15:06:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.805 15:06:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.805 15:06:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.805 15:06:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.805 15:06:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.805 15:06:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:53.805 15:06:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:53.805 15:06:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.805 15:06:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.805 15:06:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:53.805 15:06:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.805 15:06:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:53.805 15:06:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.805 15:06:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.805 15:06:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.805 15:06:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.064 15:06:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.064 15:06:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.064 15:06:09 -- paths/export.sh@5 -- # export PATH 00:18:54.064 15:06:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.064 15:06:09 -- nvmf/common.sh@47 -- # : 0 00:18:54.064 15:06:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:54.064 15:06:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:54.064 15:06:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.064 15:06:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.064 15:06:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.064 15:06:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:54.064 15:06:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:54.064 15:06:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:54.064 15:06:09 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:54.064 15:06:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:54.064 15:06:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.064 15:06:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:54.064 15:06:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:54.064 15:06:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:54.064 15:06:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.064 15:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.064 15:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.064 15:06:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:54.064 15:06:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:54.064 15:06:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:54.064 15:06:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:54.064 15:06:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:54.064 15:06:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:54.064 15:06:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.064 15:06:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.064 15:06:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:54.064 15:06:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:54.064 15:06:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:54.064 15:06:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:54.064 15:06:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:54.064 15:06:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:54.064 15:06:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:54.064 15:06:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:54.064 15:06:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:54.064 15:06:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:54.064 15:06:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:54.064 15:06:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:54.064 Cannot find device "nvmf_tgt_br" 00:18:54.064 15:06:09 -- nvmf/common.sh@155 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:54.064 Cannot find device "nvmf_tgt_br2" 00:18:54.064 15:06:09 -- nvmf/common.sh@156 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:54.064 15:06:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:54.064 Cannot find device "nvmf_tgt_br" 00:18:54.064 15:06:09 -- nvmf/common.sh@158 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:54.064 Cannot find device "nvmf_tgt_br2" 00:18:54.064 15:06:09 -- nvmf/common.sh@159 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:54.064 15:06:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:54.064 15:06:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:54.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.064 15:06:09 -- nvmf/common.sh@162 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:54.064 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:54.064 15:06:09 -- nvmf/common.sh@163 -- # true 00:18:54.064 15:06:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:54.064 15:06:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:54.064 15:06:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:54.064 15:06:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:54.064 15:06:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:54.322 15:06:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:54.322 15:06:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:54.322 15:06:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:54.322 15:06:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:54.322 15:06:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:54.322 15:06:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:54.322 15:06:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:54.322 15:06:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:54.322 15:06:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:54.322 15:06:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:54.322 15:06:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:54.322 15:06:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:54.322 15:06:09 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:54.322 15:06:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:54.323 15:06:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:54.323 15:06:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:54.323 15:06:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:54.323 15:06:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:54.323 15:06:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:54.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:18:54.323 00:18:54.323 --- 10.0.0.2 ping statistics --- 00:18:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.323 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:54.323 15:06:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:54.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:54.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:18:54.323 00:18:54.323 --- 10.0.0.3 ping statistics --- 00:18:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.323 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:54.323 15:06:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:54.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:18:54.323 00:18:54.323 --- 10.0.0.1 ping statistics --- 00:18:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.323 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:18:54.323 15:06:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.323 15:06:09 -- nvmf/common.sh@422 -- # return 0 00:18:54.323 15:06:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:54.323 15:06:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.323 15:06:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:54.323 15:06:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:54.323 15:06:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.323 15:06:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:54.323 15:06:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:54.323 15:06:09 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:54.323 15:06:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:54.323 15:06:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:54.323 15:06:09 -- common/autotest_common.sh@10 -- # set +x 00:18:54.323 15:06:09 -- nvmf/common.sh@470 -- # nvmfpid=70161 00:18:54.323 15:06:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.323 15:06:09 -- nvmf/common.sh@471 -- # waitforlisten 70161 00:18:54.323 15:06:09 -- common/autotest_common.sh@817 -- # '[' -z 70161 ']' 00:18:54.323 15:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.323 15:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:54.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.323 15:06:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
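Before nvmf_tgt is launched inside the namespace, the nvmf_veth_init steps traced above build the NET_TYPE=virt topology: one veth pair per interface, the target ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 and 10.0.0.3/24, the host-side ends enslaved to an nvmf_br bridge alongside the initiator's nvmf_init_if at 10.0.0.1/24, plus iptables rules for NVMe/TCP port 4420 and bridge forwarding. A condensed sketch of that sequence, assuming root privileges and no pre-existing interfaces or namespaces with these names:

  # Namespace and veth pairs (host end *_br, target end *_if):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiator on .1, target interfaces on .2 and .3:
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side ends together and open the firewall for NVMe/TCP:
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Sanity check: the initiator should reach both namespaced target addresses.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3

This is why the listener created below binds to 10.0.0.2: that address sits on the veth end inside the target namespace and is reachable from the initiator through nvmf_br.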
00:18:54.323 15:06:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:54.323 15:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:54.581 [2024-04-18 15:06:10.059227] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:54.581 [2024-04-18 15:06:10.059324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.581 [2024-04-18 15:06:10.200832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.838 [2024-04-18 15:06:10.299031] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.838 [2024-04-18 15:06:10.299093] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.838 [2024-04-18 15:06:10.299104] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.838 [2024-04-18 15:06:10.299113] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.838 [2024-04-18 15:06:10.299121] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.838 [2024-04-18 15:06:10.299155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.436 15:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.436 15:06:10 -- common/autotest_common.sh@850 -- # return 0 00:18:55.436 15:06:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:55.436 15:06:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:55.436 15:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.436 15:06:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.436 15:06:10 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.436 15:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:10 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 [2024-04-18 15:06:11.012557] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:55.437 15:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.437 15:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 [2024-04-18 15:06:11.036706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:55.437 15:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 NULL1 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:55.437 15:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:55.437 15:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.437 15:06:11 -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 15:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.437 15:06:11 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:55.437 [2024-04-18 15:06:11.108188] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:55.437 [2024-04-18 15:06:11.108238] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70211 ] 00:18:56.013 Attached to nqn.2016-06.io.spdk:cnode1 00:18:56.013 Namespace ID: 1 size: 1GB 00:18:56.013 fused_ordering(0) 00:18:56.013 fused_ordering(1) 00:18:56.013 fused_ordering(2) 00:18:56.013 fused_ordering(3) 00:18:56.013 fused_ordering(4) 00:18:56.013 fused_ordering(5) 00:18:56.013 fused_ordering(6) 00:18:56.013 fused_ordering(7) 00:18:56.013 fused_ordering(8) 00:18:56.013 fused_ordering(9) 00:18:56.013 fused_ordering(10) 00:18:56.013 fused_ordering(11) 00:18:56.013 fused_ordering(12) 00:18:56.013 fused_ordering(13) 00:18:56.013 fused_ordering(14) 00:18:56.013 fused_ordering(15) 00:18:56.013 fused_ordering(16) 00:18:56.013 fused_ordering(17) 00:18:56.013 fused_ordering(18) 00:18:56.013 fused_ordering(19) 00:18:56.013 fused_ordering(20) 00:18:56.013 fused_ordering(21) 00:18:56.013 fused_ordering(22) 00:18:56.013 fused_ordering(23) 00:18:56.013 fused_ordering(24) 00:18:56.013 fused_ordering(25) 00:18:56.013 fused_ordering(26) 00:18:56.013 fused_ordering(27) 00:18:56.013 fused_ordering(28) 00:18:56.013 fused_ordering(29) 00:18:56.013 fused_ordering(30) 00:18:56.013 fused_ordering(31) 00:18:56.013 fused_ordering(32) 00:18:56.013 fused_ordering(33) 00:18:56.013 fused_ordering(34) 00:18:56.013 fused_ordering(35) 00:18:56.013 fused_ordering(36) 00:18:56.013 fused_ordering(37) 00:18:56.013 fused_ordering(38) 00:18:56.013 fused_ordering(39) 00:18:56.013 fused_ordering(40) 00:18:56.013 fused_ordering(41) 00:18:56.013 fused_ordering(42) 00:18:56.013 fused_ordering(43) 00:18:56.013 fused_ordering(44) 00:18:56.013 fused_ordering(45) 00:18:56.013 fused_ordering(46) 00:18:56.013 fused_ordering(47) 00:18:56.013 fused_ordering(48) 00:18:56.013 fused_ordering(49) 00:18:56.013 fused_ordering(50) 00:18:56.013 fused_ordering(51) 00:18:56.013 fused_ordering(52) 00:18:56.013 fused_ordering(53) 00:18:56.013 fused_ordering(54) 00:18:56.013 fused_ordering(55) 00:18:56.013 fused_ordering(56) 00:18:56.013 fused_ordering(57) 00:18:56.013 fused_ordering(58) 00:18:56.013 fused_ordering(59) 00:18:56.013 fused_ordering(60) 00:18:56.013 fused_ordering(61) 00:18:56.013 fused_ordering(62) 00:18:56.013 fused_ordering(63) 00:18:56.013 fused_ordering(64) 00:18:56.013 fused_ordering(65) 00:18:56.013 fused_ordering(66) 00:18:56.013 fused_ordering(67) 00:18:56.013 fused_ordering(68) 00:18:56.013 
fused_ordering(69) 00:18:56.013 fused_ordering(70) 00:18:56.013 fused_ordering(71) 00:18:56.013 fused_ordering(72) 00:18:56.013 fused_ordering(73) 00:18:56.013 fused_ordering(74) 00:18:56.013 fused_ordering(75) 00:18:56.013 fused_ordering(76) 00:18:56.013 fused_ordering(77) 00:18:56.013 fused_ordering(78) 00:18:56.013 fused_ordering(79) 00:18:56.013 fused_ordering(80) 00:18:56.013 fused_ordering(81) 00:18:56.013 fused_ordering(82) 00:18:56.013 fused_ordering(83) 00:18:56.013 fused_ordering(84) 00:18:56.013 fused_ordering(85) 00:18:56.013 fused_ordering(86) 00:18:56.013 fused_ordering(87) 00:18:56.013 fused_ordering(88) 00:18:56.013 fused_ordering(89) 00:18:56.013 fused_ordering(90) 00:18:56.013 fused_ordering(91) 00:18:56.013 fused_ordering(92) 00:18:56.013 fused_ordering(93) 00:18:56.013 fused_ordering(94) 00:18:56.013 fused_ordering(95) 00:18:56.013 fused_ordering(96) 00:18:56.013 fused_ordering(97) 00:18:56.013 fused_ordering(98) 00:18:56.013 fused_ordering(99) 00:18:56.013 fused_ordering(100) 00:18:56.013 fused_ordering(101) 00:18:56.013 fused_ordering(102) 00:18:56.013 fused_ordering(103) 00:18:56.013 fused_ordering(104) 00:18:56.013 fused_ordering(105) 00:18:56.013 fused_ordering(106) 00:18:56.013 fused_ordering(107) 00:18:56.013 fused_ordering(108) 00:18:56.013 fused_ordering(109) 00:18:56.013 fused_ordering(110) 00:18:56.013 fused_ordering(111) 00:18:56.013 fused_ordering(112) 00:18:56.013 fused_ordering(113) 00:18:56.013 fused_ordering(114) 00:18:56.013 fused_ordering(115) 00:18:56.013 fused_ordering(116) 00:18:56.013 fused_ordering(117) 00:18:56.013 fused_ordering(118) 00:18:56.013 fused_ordering(119) 00:18:56.013 fused_ordering(120) 00:18:56.013 fused_ordering(121) 00:18:56.013 fused_ordering(122) 00:18:56.013 fused_ordering(123) 00:18:56.013 fused_ordering(124) 00:18:56.013 fused_ordering(125) 00:18:56.013 fused_ordering(126) 00:18:56.013 fused_ordering(127) 00:18:56.013 fused_ordering(128) 00:18:56.013 fused_ordering(129) 00:18:56.013 fused_ordering(130) 00:18:56.013 fused_ordering(131) 00:18:56.013 fused_ordering(132) 00:18:56.013 fused_ordering(133) 00:18:56.013 fused_ordering(134) 00:18:56.013 fused_ordering(135) 00:18:56.013 fused_ordering(136) 00:18:56.013 fused_ordering(137) 00:18:56.013 fused_ordering(138) 00:18:56.013 fused_ordering(139) 00:18:56.013 fused_ordering(140) 00:18:56.013 fused_ordering(141) 00:18:56.013 fused_ordering(142) 00:18:56.013 fused_ordering(143) 00:18:56.013 fused_ordering(144) 00:18:56.013 fused_ordering(145) 00:18:56.013 fused_ordering(146) 00:18:56.013 fused_ordering(147) 00:18:56.013 fused_ordering(148) 00:18:56.013 fused_ordering(149) 00:18:56.013 fused_ordering(150) 00:18:56.013 fused_ordering(151) 00:18:56.013 fused_ordering(152) 00:18:56.013 fused_ordering(153) 00:18:56.013 fused_ordering(154) 00:18:56.013 fused_ordering(155) 00:18:56.013 fused_ordering(156) 00:18:56.013 fused_ordering(157) 00:18:56.013 fused_ordering(158) 00:18:56.013 fused_ordering(159) 00:18:56.013 fused_ordering(160) 00:18:56.013 fused_ordering(161) 00:18:56.013 fused_ordering(162) 00:18:56.013 fused_ordering(163) 00:18:56.013 fused_ordering(164) 00:18:56.013 fused_ordering(165) 00:18:56.013 fused_ordering(166) 00:18:56.013 fused_ordering(167) 00:18:56.013 fused_ordering(168) 00:18:56.013 fused_ordering(169) 00:18:56.013 fused_ordering(170) 00:18:56.013 fused_ordering(171) 00:18:56.013 fused_ordering(172) 00:18:56.013 fused_ordering(173) 00:18:56.013 fused_ordering(174) 00:18:56.013 fused_ordering(175) 00:18:56.013 fused_ordering(176) 00:18:56.013 fused_ordering(177) 
00:18:56.013 fused_ordering(178) 00:18:56.013 fused_ordering(179) 00:18:56.013 fused_ordering(180) 00:18:56.013 fused_ordering(181) 00:18:56.013 fused_ordering(182) 00:18:56.013 fused_ordering(183) 00:18:56.013 fused_ordering(184) 00:18:56.013 fused_ordering(185) 00:18:56.013 fused_ordering(186) 00:18:56.013 fused_ordering(187) 00:18:56.013 fused_ordering(188) 00:18:56.013 fused_ordering(189) 00:18:56.013 fused_ordering(190) 00:18:56.013 fused_ordering(191) 00:18:56.013 fused_ordering(192) 00:18:56.013 fused_ordering(193) 00:18:56.013 fused_ordering(194) 00:18:56.013 fused_ordering(195) 00:18:56.013 fused_ordering(196) 00:18:56.013 fused_ordering(197) 00:18:56.013 fused_ordering(198) 00:18:56.013 fused_ordering(199) 00:18:56.013 fused_ordering(200) 00:18:56.013 fused_ordering(201) 00:18:56.013 fused_ordering(202) 00:18:56.013 fused_ordering(203) 00:18:56.013 fused_ordering(204) 00:18:56.013 fused_ordering(205) 00:18:56.013 fused_ordering(206) 00:18:56.013 fused_ordering(207) 00:18:56.013 fused_ordering(208) 00:18:56.013 fused_ordering(209) 00:18:56.013 fused_ordering(210) 00:18:56.013 fused_ordering(211) 00:18:56.014 fused_ordering(212) 00:18:56.014 fused_ordering(213) 00:18:56.014 fused_ordering(214) 00:18:56.014 fused_ordering(215) 00:18:56.014 fused_ordering(216) 00:18:56.014 fused_ordering(217) 00:18:56.014 fused_ordering(218) 00:18:56.014 fused_ordering(219) 00:18:56.014 fused_ordering(220) 00:18:56.014 fused_ordering(221) 00:18:56.014 fused_ordering(222) 00:18:56.014 fused_ordering(223) 00:18:56.014 fused_ordering(224) 00:18:56.014 fused_ordering(225) 00:18:56.014 fused_ordering(226) 00:18:56.014 fused_ordering(227) 00:18:56.014 fused_ordering(228) 00:18:56.014 fused_ordering(229) 00:18:56.014 fused_ordering(230) 00:18:56.014 fused_ordering(231) 00:18:56.014 fused_ordering(232) 00:18:56.014 fused_ordering(233) 00:18:56.014 fused_ordering(234) 00:18:56.014 fused_ordering(235) 00:18:56.014 fused_ordering(236) 00:18:56.014 fused_ordering(237) 00:18:56.014 fused_ordering(238) 00:18:56.014 fused_ordering(239) 00:18:56.014 fused_ordering(240) 00:18:56.014 fused_ordering(241) 00:18:56.014 fused_ordering(242) 00:18:56.014 fused_ordering(243) 00:18:56.014 fused_ordering(244) 00:18:56.014 fused_ordering(245) 00:18:56.014 fused_ordering(246) 00:18:56.014 fused_ordering(247) 00:18:56.014 fused_ordering(248) 00:18:56.014 fused_ordering(249) 00:18:56.014 fused_ordering(250) 00:18:56.014 fused_ordering(251) 00:18:56.014 fused_ordering(252) 00:18:56.014 fused_ordering(253) 00:18:56.014 fused_ordering(254) 00:18:56.014 fused_ordering(255) 00:18:56.014 fused_ordering(256) 00:18:56.014 fused_ordering(257) 00:18:56.014 fused_ordering(258) 00:18:56.014 fused_ordering(259) 00:18:56.014 fused_ordering(260) 00:18:56.014 fused_ordering(261) 00:18:56.014 fused_ordering(262) 00:18:56.014 fused_ordering(263) 00:18:56.014 fused_ordering(264) 00:18:56.014 fused_ordering(265) 00:18:56.014 fused_ordering(266) 00:18:56.014 fused_ordering(267) 00:18:56.014 fused_ordering(268) 00:18:56.014 fused_ordering(269) 00:18:56.014 fused_ordering(270) 00:18:56.014 fused_ordering(271) 00:18:56.014 fused_ordering(272) 00:18:56.014 fused_ordering(273) 00:18:56.014 fused_ordering(274) 00:18:56.014 fused_ordering(275) 00:18:56.014 fused_ordering(276) 00:18:56.014 fused_ordering(277) 00:18:56.014 fused_ordering(278) 00:18:56.014 fused_ordering(279) 00:18:56.014 fused_ordering(280) 00:18:56.014 fused_ordering(281) 00:18:56.014 fused_ordering(282) 00:18:56.014 fused_ordering(283) 00:18:56.014 fused_ordering(284) 00:18:56.014 
fused_ordering(285) 00:18:56.014 fused_ordering(286) 00:18:56.014 fused_ordering(287) 00:18:56.014 fused_ordering(288) 00:18:56.014 fused_ordering(289) 00:18:56.014 fused_ordering(290) 00:18:56.014 fused_ordering(291) 00:18:56.014 fused_ordering(292) 00:18:56.014 fused_ordering(293) 00:18:56.014 fused_ordering(294) 00:18:56.014 fused_ordering(295) 00:18:56.014 fused_ordering(296) 00:18:56.014 fused_ordering(297) 00:18:56.014 fused_ordering(298) 00:18:56.014 fused_ordering(299) 00:18:56.014 fused_ordering(300) 00:18:56.014 fused_ordering(301) 00:18:56.014 fused_ordering(302) 00:18:56.014 fused_ordering(303) 00:18:56.014 fused_ordering(304) 00:18:56.014 fused_ordering(305) 00:18:56.014 fused_ordering(306) 00:18:56.014 fused_ordering(307) 00:18:56.014 fused_ordering(308) 00:18:56.014 fused_ordering(309) 00:18:56.014 fused_ordering(310) 00:18:56.014 fused_ordering(311) 00:18:56.014 fused_ordering(312) 00:18:56.014 fused_ordering(313) 00:18:56.014 fused_ordering(314) 00:18:56.014 fused_ordering(315) 00:18:56.014 fused_ordering(316) 00:18:56.014 fused_ordering(317) 00:18:56.014 fused_ordering(318) 00:18:56.014 fused_ordering(319) 00:18:56.014 fused_ordering(320) 00:18:56.014 fused_ordering(321) 00:18:56.014 fused_ordering(322) 00:18:56.014 fused_ordering(323) 00:18:56.014 fused_ordering(324) 00:18:56.014 fused_ordering(325) 00:18:56.014 fused_ordering(326) 00:18:56.014 fused_ordering(327) 00:18:56.014 fused_ordering(328) 00:18:56.014 fused_ordering(329) 00:18:56.014 fused_ordering(330) 00:18:56.014 fused_ordering(331) 00:18:56.014 fused_ordering(332) 00:18:56.014 fused_ordering(333) 00:18:56.014 fused_ordering(334) 00:18:56.014 fused_ordering(335) 00:18:56.014 fused_ordering(336) 00:18:56.014 fused_ordering(337) 00:18:56.014 fused_ordering(338) 00:18:56.014 fused_ordering(339) 00:18:56.014 fused_ordering(340) 00:18:56.014 fused_ordering(341) 00:18:56.014 fused_ordering(342) 00:18:56.014 fused_ordering(343) 00:18:56.014 fused_ordering(344) 00:18:56.014 fused_ordering(345) 00:18:56.014 fused_ordering(346) 00:18:56.014 fused_ordering(347) 00:18:56.014 fused_ordering(348) 00:18:56.014 fused_ordering(349) 00:18:56.014 fused_ordering(350) 00:18:56.014 fused_ordering(351) 00:18:56.014 fused_ordering(352) 00:18:56.014 fused_ordering(353) 00:18:56.014 fused_ordering(354) 00:18:56.014 fused_ordering(355) 00:18:56.014 fused_ordering(356) 00:18:56.014 fused_ordering(357) 00:18:56.014 fused_ordering(358) 00:18:56.014 fused_ordering(359) 00:18:56.014 fused_ordering(360) 00:18:56.014 fused_ordering(361) 00:18:56.014 fused_ordering(362) 00:18:56.014 fused_ordering(363) 00:18:56.014 fused_ordering(364) 00:18:56.014 fused_ordering(365) 00:18:56.014 fused_ordering(366) 00:18:56.014 fused_ordering(367) 00:18:56.014 fused_ordering(368) 00:18:56.014 fused_ordering(369) 00:18:56.014 fused_ordering(370) 00:18:56.014 fused_ordering(371) 00:18:56.014 fused_ordering(372) 00:18:56.014 fused_ordering(373) 00:18:56.014 fused_ordering(374) 00:18:56.014 fused_ordering(375) 00:18:56.014 fused_ordering(376) 00:18:56.014 fused_ordering(377) 00:18:56.014 fused_ordering(378) 00:18:56.014 fused_ordering(379) 00:18:56.014 fused_ordering(380) 00:18:56.014 fused_ordering(381) 00:18:56.014 fused_ordering(382) 00:18:56.014 fused_ordering(383) 00:18:56.014 fused_ordering(384) 00:18:56.014 fused_ordering(385) 00:18:56.014 fused_ordering(386) 00:18:56.014 fused_ordering(387) 00:18:56.014 fused_ordering(388) 00:18:56.014 fused_ordering(389) 00:18:56.014 fused_ordering(390) 00:18:56.014 fused_ordering(391) 00:18:56.014 fused_ordering(392) 
00:18:56.014 fused_ordering(393) 00:18:56.014 fused_ordering(394) 00:18:56.014 fused_ordering(395) 00:18:56.014 fused_ordering(396) 00:18:56.014 fused_ordering(397) 00:18:56.014 fused_ordering(398) 00:18:56.014 fused_ordering(399) 00:18:56.014 fused_ordering(400) 00:18:56.014 fused_ordering(401) 00:18:56.014 fused_ordering(402) 00:18:56.014 fused_ordering(403) 00:18:56.014 fused_ordering(404) 00:18:56.014 fused_ordering(405) 00:18:56.014 fused_ordering(406) 00:18:56.014 fused_ordering(407) 00:18:56.014 fused_ordering(408) 00:18:56.014 fused_ordering(409) 00:18:56.014 fused_ordering(410) 00:18:56.580 fused_ordering(411) 00:18:56.580 fused_ordering(412) 00:18:56.580 fused_ordering(413) 00:18:56.580 fused_ordering(414) 00:18:56.580 fused_ordering(415) 00:18:56.580 fused_ordering(416) 00:18:56.580 fused_ordering(417) 00:18:56.580 fused_ordering(418) 00:18:56.580 fused_ordering(419) 00:18:56.580 fused_ordering(420) 00:18:56.580 fused_ordering(421) 00:18:56.580 fused_ordering(422) 00:18:56.580 fused_ordering(423) 00:18:56.580 fused_ordering(424) 00:18:56.580 fused_ordering(425) 00:18:56.580 fused_ordering(426) 00:18:56.580 fused_ordering(427) 00:18:56.580 fused_ordering(428) 00:18:56.580 fused_ordering(429) 00:18:56.580 fused_ordering(430) 00:18:56.580 fused_ordering(431) 00:18:56.580 fused_ordering(432) 00:18:56.580 fused_ordering(433) 00:18:56.580 fused_ordering(434) 00:18:56.580 fused_ordering(435) 00:18:56.580 fused_ordering(436) 00:18:56.580 fused_ordering(437) 00:18:56.580 fused_ordering(438) 00:18:56.580 fused_ordering(439) 00:18:56.580 fused_ordering(440) 00:18:56.580 fused_ordering(441) 00:18:56.580 fused_ordering(442) 00:18:56.580 fused_ordering(443) 00:18:56.580 fused_ordering(444) 00:18:56.580 fused_ordering(445) 00:18:56.580 fused_ordering(446) 00:18:56.580 fused_ordering(447) 00:18:56.580 fused_ordering(448) 00:18:56.580 fused_ordering(449) 00:18:56.580 fused_ordering(450) 00:18:56.580 fused_ordering(451) 00:18:56.580 fused_ordering(452) 00:18:56.580 fused_ordering(453) 00:18:56.580 fused_ordering(454) 00:18:56.580 fused_ordering(455) 00:18:56.580 fused_ordering(456) 00:18:56.580 fused_ordering(457) 00:18:56.580 fused_ordering(458) 00:18:56.580 fused_ordering(459) 00:18:56.580 fused_ordering(460) 00:18:56.580 fused_ordering(461) 00:18:56.580 fused_ordering(462) 00:18:56.580 fused_ordering(463) 00:18:56.580 fused_ordering(464) 00:18:56.580 fused_ordering(465) 00:18:56.580 fused_ordering(466) 00:18:56.580 fused_ordering(467) 00:18:56.580 fused_ordering(468) 00:18:56.580 fused_ordering(469) 00:18:56.580 fused_ordering(470) 00:18:56.580 fused_ordering(471) 00:18:56.580 fused_ordering(472) 00:18:56.580 fused_ordering(473) 00:18:56.580 fused_ordering(474) 00:18:56.580 fused_ordering(475) 00:18:56.580 fused_ordering(476) 00:18:56.580 fused_ordering(477) 00:18:56.580 fused_ordering(478) 00:18:56.580 fused_ordering(479) 00:18:56.580 fused_ordering(480) 00:18:56.580 fused_ordering(481) 00:18:56.580 fused_ordering(482) 00:18:56.580 fused_ordering(483) 00:18:56.580 fused_ordering(484) 00:18:56.580 fused_ordering(485) 00:18:56.580 fused_ordering(486) 00:18:56.580 fused_ordering(487) 00:18:56.580 fused_ordering(488) 00:18:56.580 fused_ordering(489) 00:18:56.580 fused_ordering(490) 00:18:56.580 fused_ordering(491) 00:18:56.580 fused_ordering(492) 00:18:56.580 fused_ordering(493) 00:18:56.580 fused_ordering(494) 00:18:56.580 fused_ordering(495) 00:18:56.580 fused_ordering(496) 00:18:56.580 fused_ordering(497) 00:18:56.580 fused_ordering(498) 00:18:56.580 fused_ordering(499) 00:18:56.580 
fused_ordering(500) 00:18:56.580 ... fused_ordering(929) 00:18:57.406
fused_ordering(930) 00:18:57.406 fused_ordering(931) 00:18:57.406 fused_ordering(932) 00:18:57.406 fused_ordering(933) 00:18:57.406 fused_ordering(934) 00:18:57.406 fused_ordering(935) 00:18:57.406 fused_ordering(936) 00:18:57.406 fused_ordering(937) 00:18:57.406 fused_ordering(938) 00:18:57.406 fused_ordering(939) 00:18:57.406 fused_ordering(940) 00:18:57.406 fused_ordering(941) 00:18:57.406 fused_ordering(942) 00:18:57.406 fused_ordering(943) 00:18:57.406 fused_ordering(944) 00:18:57.406 fused_ordering(945) 00:18:57.406 fused_ordering(946) 00:18:57.406 fused_ordering(947) 00:18:57.406 fused_ordering(948) 00:18:57.406 fused_ordering(949) 00:18:57.406 fused_ordering(950) 00:18:57.406 fused_ordering(951) 00:18:57.406 fused_ordering(952) 00:18:57.406 fused_ordering(953) 00:18:57.406 fused_ordering(954) 00:18:57.406 fused_ordering(955) 00:18:57.406 fused_ordering(956) 00:18:57.406 fused_ordering(957) 00:18:57.406 fused_ordering(958) 00:18:57.406 fused_ordering(959) 00:18:57.406 fused_ordering(960) 00:18:57.406 fused_ordering(961) 00:18:57.406 fused_ordering(962) 00:18:57.406 fused_ordering(963) 00:18:57.406 fused_ordering(964) 00:18:57.406 fused_ordering(965) 00:18:57.406 fused_ordering(966) 00:18:57.406 fused_ordering(967) 00:18:57.406 fused_ordering(968) 00:18:57.406 fused_ordering(969) 00:18:57.406 fused_ordering(970) 00:18:57.406 fused_ordering(971) 00:18:57.406 fused_ordering(972) 00:18:57.406 fused_ordering(973) 00:18:57.406 fused_ordering(974) 00:18:57.406 fused_ordering(975) 00:18:57.406 fused_ordering(976) 00:18:57.406 fused_ordering(977) 00:18:57.406 fused_ordering(978) 00:18:57.406 fused_ordering(979) 00:18:57.406 fused_ordering(980) 00:18:57.406 fused_ordering(981) 00:18:57.406 fused_ordering(982) 00:18:57.406 fused_ordering(983) 00:18:57.406 fused_ordering(984) 00:18:57.406 fused_ordering(985) 00:18:57.406 fused_ordering(986) 00:18:57.406 fused_ordering(987) 00:18:57.406 fused_ordering(988) 00:18:57.406 fused_ordering(989) 00:18:57.406 fused_ordering(990) 00:18:57.406 fused_ordering(991) 00:18:57.406 fused_ordering(992) 00:18:57.406 fused_ordering(993) 00:18:57.406 fused_ordering(994) 00:18:57.406 fused_ordering(995) 00:18:57.406 fused_ordering(996) 00:18:57.406 fused_ordering(997) 00:18:57.406 fused_ordering(998) 00:18:57.406 fused_ordering(999) 00:18:57.406 fused_ordering(1000) 00:18:57.406 fused_ordering(1001) 00:18:57.406 fused_ordering(1002) 00:18:57.406 fused_ordering(1003) 00:18:57.406 fused_ordering(1004) 00:18:57.406 fused_ordering(1005) 00:18:57.406 fused_ordering(1006) 00:18:57.406 fused_ordering(1007) 00:18:57.406 fused_ordering(1008) 00:18:57.406 fused_ordering(1009) 00:18:57.406 fused_ordering(1010) 00:18:57.406 fused_ordering(1011) 00:18:57.406 fused_ordering(1012) 00:18:57.406 fused_ordering(1013) 00:18:57.406 fused_ordering(1014) 00:18:57.406 fused_ordering(1015) 00:18:57.406 fused_ordering(1016) 00:18:57.406 fused_ordering(1017) 00:18:57.406 fused_ordering(1018) 00:18:57.406 fused_ordering(1019) 00:18:57.406 fused_ordering(1020) 00:18:57.406 fused_ordering(1021) 00:18:57.406 fused_ordering(1022) 00:18:57.406 fused_ordering(1023) 00:18:57.406 15:06:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:57.406 15:06:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:57.406 15:06:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:57.406 15:06:12 -- nvmf/common.sh@117 -- # sync 00:18:57.406 15:06:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.406 15:06:12 -- nvmf/common.sh@120 -- # set +e 00:18:57.406 15:06:12 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:18:57.406 15:06:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.406 rmmod nvme_tcp 00:18:57.406 rmmod nvme_fabrics 00:18:57.406 rmmod nvme_keyring 00:18:57.406 15:06:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.406 15:06:13 -- nvmf/common.sh@124 -- # set -e 00:18:57.406 15:06:13 -- nvmf/common.sh@125 -- # return 0 00:18:57.406 15:06:13 -- nvmf/common.sh@478 -- # '[' -n 70161 ']' 00:18:57.406 15:06:13 -- nvmf/common.sh@479 -- # killprocess 70161 00:18:57.406 15:06:13 -- common/autotest_common.sh@936 -- # '[' -z 70161 ']' 00:18:57.406 15:06:13 -- common/autotest_common.sh@940 -- # kill -0 70161 00:18:57.406 15:06:13 -- common/autotest_common.sh@941 -- # uname 00:18:57.406 15:06:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:57.406 15:06:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70161 00:18:57.406 15:06:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:57.406 15:06:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:57.406 killing process with pid 70161 00:18:57.406 15:06:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70161' 00:18:57.406 15:06:13 -- common/autotest_common.sh@955 -- # kill 70161 00:18:57.406 15:06:13 -- common/autotest_common.sh@960 -- # wait 70161 00:18:57.673 15:06:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:57.673 15:06:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:57.673 15:06:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:57.673 15:06:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.673 15:06:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.673 15:06:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.673 15:06:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.673 15:06:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.673 15:06:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:57.673 00:18:57.673 real 0m3.997s 00:18:57.673 user 0m4.449s 00:18:57.673 sys 0m1.478s 00:18:57.673 15:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.673 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.673 ************************************ 00:18:57.673 END TEST nvmf_fused_ordering 00:18:57.673 ************************************ 00:18:57.934 15:06:13 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:57.934 15:06:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:57.934 15:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.934 15:06:13 -- common/autotest_common.sh@10 -- # set +x 00:18:57.934 ************************************ 00:18:57.934 START TEST nvmf_delete_subsystem 00:18:57.934 ************************************ 00:18:57.934 15:06:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:18:57.934 * Looking for test storage... 
00:18:57.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:57.934 15:06:13 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.934 15:06:13 -- nvmf/common.sh@7 -- # uname -s 00:18:57.934 15:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.934 15:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.934 15:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.934 15:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.934 15:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.934 15:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.934 15:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.934 15:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.934 15:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.934 15:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.934 15:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:57.934 15:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:18:57.934 15:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.934 15:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.934 15:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.934 15:06:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.194 15:06:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.194 15:06:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.194 15:06:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.194 15:06:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.194 15:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.194 15:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.194 15:06:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.194 15:06:13 -- paths/export.sh@5 -- # export PATH 00:18:58.194 15:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.194 15:06:13 -- nvmf/common.sh@47 -- # : 0 00:18:58.194 15:06:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.194 15:06:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.194 15:06:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.194 15:06:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.194 15:06:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.194 15:06:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.194 15:06:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.194 15:06:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.194 15:06:13 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:18:58.194 15:06:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:58.194 15:06:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.194 15:06:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:58.194 15:06:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:58.194 15:06:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:58.194 15:06:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.194 15:06:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.194 15:06:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.194 15:06:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:58.194 15:06:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:58.194 15:06:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:58.194 15:06:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:58.194 15:06:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:58.194 15:06:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:58.194 15:06:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.194 15:06:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.194 15:06:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.194 15:06:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:58.194 15:06:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.194 15:06:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.194 15:06:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.194 15:06:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:18:58.194 15:06:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.194 15:06:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.194 15:06:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.194 15:06:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.194 15:06:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:58.194 15:06:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:58.194 Cannot find device "nvmf_tgt_br" 00:18:58.194 15:06:13 -- nvmf/common.sh@155 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:58.194 Cannot find device "nvmf_tgt_br2" 00:18:58.194 15:06:13 -- nvmf/common.sh@156 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:58.194 15:06:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:58.194 Cannot find device "nvmf_tgt_br" 00:18:58.194 15:06:13 -- nvmf/common.sh@158 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:58.194 Cannot find device "nvmf_tgt_br2" 00:18:58.194 15:06:13 -- nvmf/common.sh@159 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:58.194 15:06:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:58.194 15:06:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.194 15:06:13 -- nvmf/common.sh@162 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.194 15:06:13 -- nvmf/common.sh@163 -- # true 00:18:58.194 15:06:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.194 15:06:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.194 15:06:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.194 15:06:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.453 15:06:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.453 15:06:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.453 15:06:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.453 15:06:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:58.453 15:06:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:58.453 15:06:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:58.453 15:06:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:58.453 15:06:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:58.453 15:06:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:58.453 15:06:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.453 15:06:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.453 15:06:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.453 15:06:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:58.453 15:06:14 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:58.453 15:06:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.453 15:06:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.453 15:06:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.453 15:06:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.453 15:06:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.453 15:06:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:58.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:18:58.454 00:18:58.454 --- 10.0.0.2 ping statistics --- 00:18:58.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.454 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:58.454 15:06:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:58.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:18:58.454 00:18:58.454 --- 10.0.0.3 ping statistics --- 00:18:58.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.454 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:58.454 15:06:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:18:58.713 00:18:58.713 --- 10.0.0.1 ping statistics --- 00:18:58.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.713 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:58.713 15:06:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.713 15:06:14 -- nvmf/common.sh@422 -- # return 0 00:18:58.713 15:06:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:58.713 15:06:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.713 15:06:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:58.713 15:06:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:58.713 15:06:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.713 15:06:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:58.713 15:06:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:58.713 15:06:14 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:18:58.713 15:06:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:58.713 15:06:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:58.713 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.713 15:06:14 -- nvmf/common.sh@470 -- # nvmfpid=70405 00:18:58.713 15:06:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:58.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.713 15:06:14 -- nvmf/common.sh@471 -- # waitforlisten 70405 00:18:58.713 15:06:14 -- common/autotest_common.sh@817 -- # '[' -z 70405 ']' 00:18:58.713 15:06:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.713 15:06:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:58.713 15:06:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:58.713 15:06:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:58.713 15:06:14 -- common/autotest_common.sh@10 -- # set +x 00:18:58.713 [2024-04-18 15:06:14.267050] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:18:58.713 [2024-04-18 15:06:14.267159] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.713 [2024-04-18 15:06:14.409812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:58.971 [2024-04-18 15:06:14.511317] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.971 [2024-04-18 15:06:14.511388] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.971 [2024-04-18 15:06:14.511402] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.971 [2024-04-18 15:06:14.511413] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.971 [2024-04-18 15:06:14.511423] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.971 [2024-04-18 15:06:14.511578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.971 [2024-04-18 15:06:14.511764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.582 15:06:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.582 15:06:15 -- common/autotest_common.sh@850 -- # return 0 00:18:59.582 15:06:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:59.582 15:06:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 15:06:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.582 15:06:15 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:59.582 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 [2024-04-18 15:06:15.241729] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.582 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.582 15:06:15 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:59.582 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.582 15:06:15 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.582 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 [2024-04-18 15:06:15.266258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.582 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.582 15:06:15 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:59.582 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.582 
NULL1 00:18:59.582 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.582 15:06:15 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:59.582 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.582 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.841 Delay0 00:18:59.841 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.841 15:06:15 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:59.841 15:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.841 15:06:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.841 15:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.841 15:06:15 -- target/delete_subsystem.sh@28 -- # perf_pid=70456 00:18:59.841 15:06:15 -- target/delete_subsystem.sh@30 -- # sleep 2 00:18:59.841 15:06:15 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:18:59.841 [2024-04-18 15:06:15.482447] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:01.745 15:06:17 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.745 15:06:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.745 15:06:17 -- common/autotest_common.sh@10 -- # set +x 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write 
completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Write completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 starting I/O failed: -6 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.003 Read completed with error (sct=0, sc=8) 00:19:02.004 [2024-04-18 15:06:17.509806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abcdf0 is same with the state(5) to be set 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 
Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, 
sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 starting I/O failed: -6 00:19:02.004 [2024-04-18 15:06:17.514022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faecc000c00 is same with the state(5) to be set 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Write completed with error (sct=0, sc=8) 00:19:02.004 Read completed with error (sct=0, sc=8) 00:19:02.938 [2024-04-18 15:06:18.495326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adc710 is same with the state(5) to be set 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write 
completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 [2024-04-18 15:06:18.509031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adb350 is same with the state(5) to be set 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 [2024-04-18 15:06:18.509277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abd0b0 is same with the state(5) to be set 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error 
(sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 [2024-04-18 15:06:18.512344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faecc00bf90 is same with the state(5) to be set 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Write completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 Read completed with error (sct=0, sc=8) 00:19:02.938 [2024-04-18 15:06:18.512587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faecc00c690 is same with the state(5) to be set 00:19:02.938 [2024-04-18 15:06:18.513762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc710 (9): Bad file descriptor 00:19:02.938 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:19:02.938 15:06:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.938 15:06:18 -- target/delete_subsystem.sh@34 -- # delay=0 00:19:02.938 15:06:18 -- target/delete_subsystem.sh@35 -- # kill -0 70456 00:19:02.938 15:06:18 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:19:02.938 Initializing NVMe Controllers 00:19:02.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:02.938 Controller IO queue size 128, less than required. 00:19:02.938 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:19:02.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:02.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:02.938 Initialization complete. Launching workers. 00:19:02.938 ======================================================== 00:19:02.938 Latency(us) 00:19:02.938 Device Information : IOPS MiB/s Average min max 00:19:02.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.20 0.09 879717.18 409.44 1007392.45 00:19:02.939 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.26 0.08 929095.73 523.22 2001196.11 00:19:02.939 ======================================================== 00:19:02.939 Total : 341.46 0.17 903470.71 409.44 2001196.11 00:19:02.939 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@35 -- # kill -0 70456 00:19:03.508 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70456) - No such process 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@45 -- # NOT wait 70456 00:19:03.508 15:06:19 -- common/autotest_common.sh@638 -- # local es=0 00:19:03.508 15:06:19 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 70456 00:19:03.508 15:06:19 -- common/autotest_common.sh@626 -- # local arg=wait 00:19:03.508 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:03.508 15:06:19 -- common/autotest_common.sh@630 -- # type -t wait 00:19:03.508 15:06:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:03.508 15:06:19 -- common/autotest_common.sh@641 -- # wait 70456 00:19:03.508 15:06:19 -- common/autotest_common.sh@641 -- # es=1 00:19:03.508 15:06:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:03.508 15:06:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:03.508 15:06:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:03.508 15:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.508 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 15:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:03.508 15:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.508 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 [2024-04-18 15:06:19.061833] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.508 15:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:03.508 15:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.508 15:06:19 -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 15:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@54 -- # perf_pid=70506 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@56 -- # delay=0 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 
-P 4 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:03.508 15:06:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:03.796 [2024-04-18 15:06:19.259933] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:04.055 15:06:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:04.055 15:06:19 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:04.055 15:06:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:04.621 15:06:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:04.621 15:06:20 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:04.621 15:06:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:05.189 15:06:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:05.189 15:06:20 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:05.189 15:06:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:05.448 15:06:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:05.448 15:06:21 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:05.448 15:06:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:06.017 15:06:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:06.017 15:06:21 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:06.017 15:06:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:06.584 15:06:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:06.584 15:06:22 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:06.584 15:06:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:06.844 Initializing NVMe Controllers 00:19:06.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:06.844 Controller IO queue size 128, less than required. 00:19:06.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:06.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:06.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:06.844 Initialization complete. Launching workers. 
00:19:06.844 ======================================================== 00:19:06.844 Latency(us) 00:19:06.844 Device Information : IOPS MiB/s Average min max 00:19:06.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005329.47 1000169.89 1015107.84 00:19:06.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005118.03 1000117.46 1041515.34 00:19:06.844 ======================================================== 00:19:06.844 Total : 256.00 0.12 1005223.75 1000117.46 1041515.34 00:19:06.844 00:19:07.102 15:06:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:07.102 15:06:22 -- target/delete_subsystem.sh@57 -- # kill -0 70506 00:19:07.102 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70506) - No such process 00:19:07.102 15:06:22 -- target/delete_subsystem.sh@67 -- # wait 70506 00:19:07.102 15:06:22 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:07.102 15:06:22 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:19:07.102 15:06:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:07.102 15:06:22 -- nvmf/common.sh@117 -- # sync 00:19:07.102 15:06:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:07.102 15:06:22 -- nvmf/common.sh@120 -- # set +e 00:19:07.102 15:06:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:07.102 15:06:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:07.102 rmmod nvme_tcp 00:19:07.102 rmmod nvme_fabrics 00:19:07.102 rmmod nvme_keyring 00:19:07.102 15:06:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:07.102 15:06:22 -- nvmf/common.sh@124 -- # set -e 00:19:07.102 15:06:22 -- nvmf/common.sh@125 -- # return 0 00:19:07.102 15:06:22 -- nvmf/common.sh@478 -- # '[' -n 70405 ']' 00:19:07.102 15:06:22 -- nvmf/common.sh@479 -- # killprocess 70405 00:19:07.102 15:06:22 -- common/autotest_common.sh@936 -- # '[' -z 70405 ']' 00:19:07.102 15:06:22 -- common/autotest_common.sh@940 -- # kill -0 70405 00:19:07.102 15:06:22 -- common/autotest_common.sh@941 -- # uname 00:19:07.102 15:06:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.102 15:06:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70405 00:19:07.102 15:06:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.102 15:06:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.102 15:06:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70405' 00:19:07.102 killing process with pid 70405 00:19:07.102 15:06:22 -- common/autotest_common.sh@955 -- # kill 70405 00:19:07.102 15:06:22 -- common/autotest_common.sh@960 -- # wait 70405 00:19:07.361 15:06:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:07.361 15:06:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:07.361 15:06:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:07.361 15:06:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.361 15:06:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:07.361 15:06:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.361 15:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.361 15:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.361 15:06:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:07.361 00:19:07.361 real 0m9.536s 00:19:07.361 user 0m28.020s 00:19:07.361 sys 0m2.418s 00:19:07.361 15:06:23 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:07.361 ************************************ 00:19:07.361 END TEST nvmf_delete_subsystem 00:19:07.361 ************************************ 00:19:07.361 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.697 15:06:23 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:07.697 15:06:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:07.697 15:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:07.697 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:07.697 ************************************ 00:19:07.697 START TEST nvmf_ns_masking 00:19:07.697 ************************************ 00:19:07.697 15:06:23 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:07.697 * Looking for test storage... 00:19:07.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:07.697 15:06:23 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:07.697 15:06:23 -- nvmf/common.sh@7 -- # uname -s 00:19:07.697 15:06:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.697 15:06:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.698 15:06:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.698 15:06:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.698 15:06:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.698 15:06:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.698 15:06:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.698 15:06:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.698 15:06:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.698 15:06:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.698 15:06:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:07.698 15:06:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:07.698 15:06:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.698 15:06:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.698 15:06:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:07.698 15:06:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.698 15:06:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.698 15:06:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.698 15:06:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.698 15:06:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.698 15:06:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.698 15:06:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.698 15:06:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.698 15:06:23 -- paths/export.sh@5 -- # export PATH 00:19:07.698 15:06:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.698 15:06:23 -- nvmf/common.sh@47 -- # : 0 00:19:07.698 15:06:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:07.698 15:06:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:07.698 15:06:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.698 15:06:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.698 15:06:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.698 15:06:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:07.698 15:06:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:07.698 15:06:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:07.698 15:06:23 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.698 15:06:23 -- target/ns_masking.sh@11 -- # loops=5 00:19:07.698 15:06:23 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:07.698 15:06:23 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:19:07.698 15:06:23 -- target/ns_masking.sh@15 -- # uuidgen 00:19:07.957 15:06:23 -- target/ns_masking.sh@15 -- # HOSTID=1dd78897-f318-4014-805b-89609bf362b4 00:19:07.957 15:06:23 -- target/ns_masking.sh@44 -- # nvmftestinit 00:19:07.957 15:06:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:07.957 15:06:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.957 15:06:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:07.957 15:06:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:07.957 15:06:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:07.957 15:06:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.957 15:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.957 15:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
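For reference, the identifiers the ns_masking test fixes in the setup above are reused throughout the rest of this log: the subsystem NQN nqn.2016-06.io.spdk:cnode1, the host NQN nqn.2016-06.io.spdk:host1, and a per-run host ID produced by uuidgen. The connect/disconnect helpers invoked later reduce to roughly the following sketch (command and identifier values are taken from the log itself; the real helpers additionally wait until lsblk reports the expected number of SPDKISFASTANDAWESOME namespaces):

    HOSTID=1dd78897-f318-4014-805b-89609bf362b4    # uuidgen output for this particular run
    # connect: attach to the target subsystem as host1 with 4 I/O queues
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID"
    # disconnect: drop the controller again between scenarios
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1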
00:19:07.957 15:06:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:07.957 15:06:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:07.957 15:06:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:07.957 15:06:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:07.957 15:06:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:07.957 15:06:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:07.957 15:06:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.957 15:06:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.957 15:06:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:07.957 15:06:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:07.957 15:06:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:07.957 15:06:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:07.957 15:06:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:07.957 15:06:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.957 15:06:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:07.957 15:06:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:07.957 15:06:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:07.957 15:06:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:07.957 15:06:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:07.957 15:06:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:07.957 Cannot find device "nvmf_tgt_br" 00:19:07.957 15:06:23 -- nvmf/common.sh@155 -- # true 00:19:07.957 15:06:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.957 Cannot find device "nvmf_tgt_br2" 00:19:07.957 15:06:23 -- nvmf/common.sh@156 -- # true 00:19:07.957 15:06:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:07.957 15:06:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:07.957 Cannot find device "nvmf_tgt_br" 00:19:07.957 15:06:23 -- nvmf/common.sh@158 -- # true 00:19:07.957 15:06:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:07.957 Cannot find device "nvmf_tgt_br2" 00:19:07.957 15:06:23 -- nvmf/common.sh@159 -- # true 00:19:07.957 15:06:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:07.958 15:06:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:07.958 15:06:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.958 15:06:23 -- nvmf/common.sh@162 -- # true 00:19:07.958 15:06:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:07.958 15:06:23 -- nvmf/common.sh@163 -- # true 00:19:07.958 15:06:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:07.958 15:06:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:07.958 15:06:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:07.958 15:06:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:07.958 15:06:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:07.958 15:06:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:19:07.958 15:06:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.216 15:06:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:08.216 15:06:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:08.216 15:06:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:08.216 15:06:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:08.216 15:06:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:08.216 15:06:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:08.216 15:06:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.216 15:06:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.216 15:06:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.216 15:06:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:08.216 15:06:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:08.216 15:06:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.216 15:06:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.216 15:06:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.216 15:06:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.216 15:06:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.216 15:06:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:08.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:19:08.216 00:19:08.216 --- 10.0.0.2 ping statistics --- 00:19:08.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.216 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:19:08.216 15:06:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:08.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:08.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:19:08.216 00:19:08.216 --- 10.0.0.3 ping statistics --- 00:19:08.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.216 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:08.216 15:06:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:08.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:08.216 00:19:08.216 --- 10.0.0.1 ping statistics --- 00:19:08.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.216 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:08.216 15:06:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.216 15:06:23 -- nvmf/common.sh@422 -- # return 0 00:19:08.216 15:06:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:08.216 15:06:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.216 15:06:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:08.216 15:06:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:08.216 15:06:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.216 15:06:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:08.216 15:06:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:08.216 15:06:23 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:19:08.216 15:06:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:08.216 15:06:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:08.216 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.216 15:06:23 -- nvmf/common.sh@470 -- # nvmfpid=70747 00:19:08.216 15:06:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:08.216 15:06:23 -- nvmf/common.sh@471 -- # waitforlisten 70747 00:19:08.216 15:06:23 -- common/autotest_common.sh@817 -- # '[' -z 70747 ']' 00:19:08.216 15:06:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.216 15:06:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:08.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.216 15:06:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.216 15:06:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:08.216 15:06:23 -- common/autotest_common.sh@10 -- # set +x 00:19:08.216 [2024-04-18 15:06:23.881620] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:08.216 [2024-04-18 15:06:23.881703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.475 [2024-04-18 15:06:24.026218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.475 [2024-04-18 15:06:24.128229] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.475 [2024-04-18 15:06:24.128300] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.475 [2024-04-18 15:06:24.128312] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.475 [2024-04-18 15:06:24.128321] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.475 [2024-04-18 15:06:24.128330] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
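The nvmf_veth_init sequence that just finished builds the virtual topology used by the TCP tests in this run: one initiator-side veth pair left in the root namespace and two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. Condensed to its essentials it is roughly the following (interface, namespace, and address names as shown in the log above; the real helper in test/nvmf/common.sh first tears down any leftovers, which is where the "Cannot find device" and "Cannot open network namespace" messages come from):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, stays in the root netns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port  -> 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port -> 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With the pings above confirming reachability, the target application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its 10.0.0.2/10.0.0.3 listeners are reachable from the root namespace over the bridge.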
00:19:08.475 [2024-04-18 15:06:24.128477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.475 [2024-04-18 15:06:24.128610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.475 [2024-04-18 15:06:24.129309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.475 [2024-04-18 15:06:24.129316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.410 15:06:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:09.410 15:06:24 -- common/autotest_common.sh@850 -- # return 0 00:19:09.410 15:06:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:09.410 15:06:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:09.410 15:06:24 -- common/autotest_common.sh@10 -- # set +x 00:19:09.410 15:06:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.410 15:06:24 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:09.410 [2024-04-18 15:06:25.080723] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.669 15:06:25 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:19:09.669 15:06:25 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:19:09.669 15:06:25 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:09.669 Malloc1 00:19:09.669 15:06:25 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:09.928 Malloc2 00:19:09.928 15:06:25 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:10.187 15:06:25 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:10.445 15:06:25 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.445 [2024-04-18 15:06:26.108204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.445 15:06:26 -- target/ns_masking.sh@61 -- # connect 00:19:10.445 15:06:26 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1dd78897-f318-4014-805b-89609bf362b4 -a 10.0.0.2 -s 4420 -i 4 00:19:10.704 15:06:26 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:19:10.704 15:06:26 -- common/autotest_common.sh@1184 -- # local i=0 00:19:10.704 15:06:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.704 15:06:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:10.704 15:06:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:12.608 15:06:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:12.608 15:06:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:12.608 15:06:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:12.608 15:06:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:12.608 15:06:28 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.608 15:06:28 -- common/autotest_common.sh@1194 -- # return 0 00:19:12.608 15:06:28 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:19:12.608 15:06:28 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:12.867 15:06:28 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:19:12.867 15:06:28 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:19:12.867 15:06:28 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:19:12.867 15:06:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:12.867 15:06:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:12.867 [ 0]:0x1 00:19:12.867 15:06:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:12.867 15:06:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:12.867 15:06:28 -- target/ns_masking.sh@40 -- # nguid=b3fa6ede4d13429995b5e518154f30c2 00:19:12.867 15:06:28 -- target/ns_masking.sh@41 -- # [[ b3fa6ede4d13429995b5e518154f30c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:12.867 15:06:28 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:13.125 15:06:28 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:19:13.125 15:06:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:13.126 15:06:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:13.126 [ 0]:0x1 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # nguid=b3fa6ede4d13429995b5e518154f30c2 00:19:13.126 15:06:28 -- target/ns_masking.sh@41 -- # [[ b3fa6ede4d13429995b5e518154f30c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.126 15:06:28 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:19:13.126 15:06:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:13.126 15:06:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:13.126 [ 1]:0x2 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:13.126 15:06:28 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:13.126 15:06:28 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:13.126 15:06:28 -- target/ns_masking.sh@69 -- # disconnect 00:19:13.126 15:06:28 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:13.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.383 15:06:28 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.384 15:06:29 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:13.641 15:06:29 -- target/ns_masking.sh@77 -- # connect 1 00:19:13.641 15:06:29 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1dd78897-f318-4014-805b-89609bf362b4 -a 10.0.0.2 -s 4420 -i 4 00:19:13.900 15:06:29 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:13.900 15:06:29 -- common/autotest_common.sh@1184 -- # local i=0 00:19:13.900 15:06:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:13.900 15:06:29 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:19:13.900 15:06:29 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:19:13.900 15:06:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:15.808 15:06:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:15.808 15:06:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:15.808 15:06:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:15.808 15:06:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:15.808 15:06:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:15.808 15:06:31 -- common/autotest_common.sh@1194 -- # return 0 00:19:15.808 15:06:31 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:19:15.808 15:06:31 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:15.808 15:06:31 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:19:15.808 15:06:31 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:19:15.808 15:06:31 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:19:15.808 15:06:31 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.808 15:06:31 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:19:15.808 15:06:31 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:19:15.808 15:06:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.808 15:06:31 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:19:15.808 15:06:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.808 15:06:31 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:19:15.808 15:06:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:15.808 15:06:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:19:16.067 15:06:31 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.067 15:06:31 -- common/autotest_common.sh@641 -- # es=1 00:19:16.067 15:06:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:16.067 15:06:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:16.067 15:06:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:16.067 15:06:31 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:19:16.067 15:06:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:16.067 15:06:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:16.067 [ 0]:0x2 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.067 15:06:31 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:16.067 15:06:31 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.067 15:06:31 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:16.326 15:06:31 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:19:16.326 15:06:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:16.326 15:06:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:16.326 [ 0]:0x1 00:19:16.326 
15:06:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:16.326 15:06:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.326 15:06:31 -- target/ns_masking.sh@40 -- # nguid=b3fa6ede4d13429995b5e518154f30c2 00:19:16.326 15:06:31 -- target/ns_masking.sh@41 -- # [[ b3fa6ede4d13429995b5e518154f30c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.326 15:06:31 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:19:16.326 15:06:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:16.326 15:06:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:16.326 [ 1]:0x2 00:19:16.326 15:06:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:16.326 15:06:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.326 15:06:31 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:16.326 15:06:31 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.326 15:06:31 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:16.585 15:06:32 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:19:16.585 15:06:32 -- common/autotest_common.sh@638 -- # local es=0 00:19:16.585 15:06:32 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:19:16.585 15:06:32 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:19:16.585 15:06:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.585 15:06:32 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:19:16.585 15:06:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.585 15:06:32 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:19:16.585 15:06:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:16.585 15:06:32 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:19:16.585 15:06:32 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.585 15:06:32 -- common/autotest_common.sh@641 -- # es=1 00:19:16.585 15:06:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:16.585 15:06:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:16.585 15:06:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:16.585 15:06:32 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:19:16.585 15:06:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:16.585 15:06:32 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:16.585 [ 0]:0x2 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:16.585 15:06:32 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:16.585 15:06:32 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:16.585 15:06:32 -- target/ns_masking.sh@91 -- # disconnect 00:19:16.585 15:06:32 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:16.844 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.844 15:06:32 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:16.844 15:06:32 -- target/ns_masking.sh@95 -- # connect 2 00:19:16.844 15:06:32 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1dd78897-f318-4014-805b-89609bf362b4 -a 10.0.0.2 -s 4420 -i 4 00:19:17.102 15:06:32 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:17.102 15:06:32 -- common/autotest_common.sh@1184 -- # local i=0 00:19:17.102 15:06:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.102 15:06:32 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:19:17.102 15:06:32 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:19:17.102 15:06:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:19.012 15:06:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:19.012 15:06:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:19.012 15:06:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.012 15:06:34 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:19:19.012 15:06:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.012 15:06:34 -- common/autotest_common.sh@1194 -- # return 0 00:19:19.012 15:06:34 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:19.012 15:06:34 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:19:19.012 15:06:34 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:19:19.012 15:06:34 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:19:19.012 15:06:34 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:19:19.012 15:06:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.012 15:06:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:19.012 [ 0]:0x1 00:19:19.012 15:06:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.012 15:06:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:19.270 15:06:34 -- target/ns_masking.sh@40 -- # nguid=b3fa6ede4d13429995b5e518154f30c2 00:19:19.270 15:06:34 -- target/ns_masking.sh@41 -- # [[ b3fa6ede4d13429995b5e518154f30c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.270 15:06:34 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:19:19.270 15:06:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:19.270 15:06:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.270 [ 1]:0x2 00:19:19.270 15:06:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:19.270 15:06:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:19.270 15:06:34 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:19.270 15:06:34 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.270 15:06:34 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:19.529 15:06:35 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:19:19.529 15:06:35 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.529 15:06:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:19:19.529 15:06:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.529 15:06:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:19:19.529 15:06:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.529 15:06:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:19:19.529 15:06:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.529 15:06:35 -- common/autotest_common.sh@641 -- # es=1 00:19:19.529 15:06:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.529 15:06:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.529 15:06:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.529 15:06:35 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:19:19.529 15:06:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:19.529 15:06:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.529 [ 0]:0x2 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:19.529 15:06:35 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:19.529 15:06:35 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.529 15:06:35 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:19.529 15:06:35 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.529 15:06:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:19.529 15:06:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.529 15:06:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.529 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.529 15:06:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.529 15:06:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:19.529 15:06:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:19.788 [2024-04-18 15:06:35.356170] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:19.788 2024/04/18 15:06:35 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:19:19.788 request: 00:19:19.788 { 00:19:19.788 "method": "nvmf_ns_remove_host", 00:19:19.788 "params": { 00:19:19.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.788 "nsid": 2, 00:19:19.788 "host": "nqn.2016-06.io.spdk:host1" 00:19:19.788 } 00:19:19.788 } 00:19:19.788 Got JSON-RPC error response 00:19:19.788 GoRPCClient: error on JSON-RPC call 00:19:19.788 15:06:35 -- common/autotest_common.sh@641 -- # es=1 00:19:19.788 15:06:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.788 15:06:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.788 15:06:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.788 15:06:35 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:19:19.788 15:06:35 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.788 15:06:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:19:19.788 15:06:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:19:19.788 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.788 15:06:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:19:19.788 15:06:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.788 15:06:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:19:19.788 15:06:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.788 15:06:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:19:19.788 15:06:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:19.788 15:06:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:19.788 15:06:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:19:19.788 15:06:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:19.788 15:06:35 -- common/autotest_common.sh@641 -- # es=1 00:19:19.788 15:06:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.788 15:06:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.788 15:06:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.788 15:06:35 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:19:19.788 15:06:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:19:19.788 15:06:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:19.788 [ 0]:0x2 00:19:19.788 15:06:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:19.788 15:06:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:20.047 15:06:35 -- target/ns_masking.sh@40 -- # nguid=5f0d0836642a4aed905afe76e44929de 00:19:20.047 15:06:35 -- target/ns_masking.sh@41 -- # [[ 5f0d0836642a4aed905afe76e44929de != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:20.047 15:06:35 -- target/ns_masking.sh@108 -- # disconnect 00:19:20.047 15:06:35 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:20.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.047 15:06:35 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.305 15:06:35 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:20.305 15:06:35 -- target/ns_masking.sh@114 -- # nvmftestfini 
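Before nvmftestfini tears the target down, the masking scenario that played out above amounts to the following rpc.py sequence (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the interleaved nvme connect/disconnect calls and visibility checks are omitted here for brevity):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2        # added auto-visible
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1   # rejected with Invalid parameters, as logged above
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

On the initiator side, visibility is judged by the ns_is_visible helper: it greps nvme list-ns /dev/nvme0 for the namespace ID and reads the NGUID via nvme id-ns /dev/nvme0 -n <nsid> -o json piped through jq -r .nguid; a masked namespace reports the all-zero NGUID, while a visible one reports its real NGUID (b3fa6ede... for Malloc1 and 5f0d0836... for Malloc2 in this run).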
00:19:20.305 15:06:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:20.305 15:06:35 -- nvmf/common.sh@117 -- # sync 00:19:20.305 15:06:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.305 15:06:35 -- nvmf/common.sh@120 -- # set +e 00:19:20.305 15:06:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.305 15:06:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.305 rmmod nvme_tcp 00:19:20.305 rmmod nvme_fabrics 00:19:20.305 rmmod nvme_keyring 00:19:20.305 15:06:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.305 15:06:35 -- nvmf/common.sh@124 -- # set -e 00:19:20.305 15:06:35 -- nvmf/common.sh@125 -- # return 0 00:19:20.305 15:06:35 -- nvmf/common.sh@478 -- # '[' -n 70747 ']' 00:19:20.305 15:06:35 -- nvmf/common.sh@479 -- # killprocess 70747 00:19:20.305 15:06:35 -- common/autotest_common.sh@936 -- # '[' -z 70747 ']' 00:19:20.305 15:06:35 -- common/autotest_common.sh@940 -- # kill -0 70747 00:19:20.305 15:06:35 -- common/autotest_common.sh@941 -- # uname 00:19:20.305 15:06:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:20.305 15:06:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70747 00:19:20.305 15:06:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:20.305 15:06:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:20.305 killing process with pid 70747 00:19:20.305 15:06:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70747' 00:19:20.305 15:06:35 -- common/autotest_common.sh@955 -- # kill 70747 00:19:20.305 15:06:35 -- common/autotest_common.sh@960 -- # wait 70747 00:19:20.563 15:06:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:20.563 15:06:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:20.563 15:06:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:20.563 15:06:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.563 15:06:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.563 15:06:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.563 15:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.563 15:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.563 15:06:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:20.563 00:19:20.563 real 0m13.040s 00:19:20.563 user 0m50.315s 00:19:20.563 sys 0m3.037s 00:19:20.563 15:06:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:20.563 ************************************ 00:19:20.563 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.563 END TEST nvmf_ns_masking 00:19:20.563 ************************************ 00:19:20.822 15:06:36 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:19:20.822 15:06:36 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:19:20.822 15:06:36 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:20.822 15:06:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:20.822 15:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:20.822 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:20.822 ************************************ 00:19:20.822 START TEST nvmf_host_management 00:19:20.822 ************************************ 00:19:20.822 15:06:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:19:20.822 * Looking for test storage... 
00:19:20.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:20.822 15:06:36 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.822 15:06:36 -- nvmf/common.sh@7 -- # uname -s 00:19:20.822 15:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.822 15:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.822 15:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.822 15:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.822 15:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.822 15:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.822 15:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.822 15:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.822 15:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.822 15:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.822 15:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:20.822 15:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:20.822 15:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.822 15:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.822 15:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.822 15:06:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.080 15:06:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.080 15:06:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.080 15:06:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.080 15:06:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.081 15:06:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.081 15:06:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.081 15:06:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.081 15:06:36 -- paths/export.sh@5 -- # export PATH 00:19:21.081 15:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.081 15:06:36 -- nvmf/common.sh@47 -- # : 0 00:19:21.081 15:06:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.081 15:06:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.081 15:06:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.081 15:06:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.081 15:06:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.081 15:06:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.081 15:06:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.081 15:06:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.081 15:06:36 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.081 15:06:36 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.081 15:06:36 -- target/host_management.sh@105 -- # nvmftestinit 00:19:21.081 15:06:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:21.081 15:06:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.081 15:06:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:21.081 15:06:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:21.081 15:06:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:21.081 15:06:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.081 15:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.081 15:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.081 15:06:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:21.081 15:06:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:21.081 15:06:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:21.081 15:06:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:21.081 15:06:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:21.081 15:06:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:21.081 15:06:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.081 15:06:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.081 15:06:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:21.081 15:06:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:21.081 15:06:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:21.081 15:06:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:21.081 15:06:36 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:21.081 15:06:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.081 15:06:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:21.081 15:06:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:21.081 15:06:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:21.081 15:06:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:21.081 15:06:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:21.081 15:06:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:21.081 Cannot find device "nvmf_tgt_br" 00:19:21.081 15:06:36 -- nvmf/common.sh@155 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.081 Cannot find device "nvmf_tgt_br2" 00:19:21.081 15:06:36 -- nvmf/common.sh@156 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:21.081 15:06:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:21.081 Cannot find device "nvmf_tgt_br" 00:19:21.081 15:06:36 -- nvmf/common.sh@158 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:21.081 Cannot find device "nvmf_tgt_br2" 00:19:21.081 15:06:36 -- nvmf/common.sh@159 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:21.081 15:06:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:21.081 15:06:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.081 15:06:36 -- nvmf/common.sh@162 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.081 15:06:36 -- nvmf/common.sh@163 -- # true 00:19:21.081 15:06:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.081 15:06:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.081 15:06:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.081 15:06:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.081 15:06:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.081 15:06:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.340 15:06:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.340 15:06:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:21.340 15:06:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:21.340 15:06:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:21.340 15:06:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:21.340 15:06:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:21.340 15:06:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:21.340 15:06:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.340 15:06:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.340 15:06:36 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.340 15:06:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:21.340 15:06:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:21.340 15:06:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.340 15:06:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.340 15:06:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.340 15:06:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.340 15:06:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.340 15:06:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:21.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:21.340 00:19:21.340 --- 10.0.0.2 ping statistics --- 00:19:21.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.340 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:21.340 15:06:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:21.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:19:21.340 00:19:21.340 --- 10.0.0.3 ping statistics --- 00:19:21.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.340 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:21.340 15:06:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:19:21.340 00:19:21.340 --- 10.0.0.1 ping statistics --- 00:19:21.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.340 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:21.340 15:06:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.340 15:06:36 -- nvmf/common.sh@422 -- # return 0 00:19:21.340 15:06:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:21.340 15:06:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.340 15:06:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:21.340 15:06:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:21.340 15:06:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.340 15:06:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:21.340 15:06:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:21.340 15:06:36 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:19:21.340 15:06:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:21.340 15:06:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.340 15:06:36 -- common/autotest_common.sh@10 -- # set +x 00:19:21.599 ************************************ 00:19:21.599 START TEST nvmf_host_management 00:19:21.599 ************************************ 00:19:21.599 15:06:37 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:19:21.599 15:06:37 -- target/host_management.sh@69 -- # starttarget 00:19:21.599 15:06:37 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:19:21.599 15:06:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:21.599 15:06:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:21.599 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:21.599 15:06:37 -- nvmf/common.sh@470 -- # nvmfpid=71311 00:19:21.599 
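[Editorial sketch] The namespace/veth bring-up that the pings above just verified is spread across many xtrace lines; as a standalone sketch (interface names and addresses taken from this log, the second target pair and error handling elided) it is roughly:

# Build the loopback NVMe/TCP test topology used by this run.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# One veth pair for the initiator side, one for the target side
# (the log also creates a second target pair, nvmf_tgt_if2 / 10.0.0.3, the same way).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if         # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
# Bridge the host-side peer interfaces so 10.0.0.1 can reach 10.0.0.2.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port and allow bridge-internal forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Same sanity checks as the pings above.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1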
15:06:37 -- nvmf/common.sh@471 -- # waitforlisten 71311 00:19:21.599 15:06:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:21.599 15:06:37 -- common/autotest_common.sh@817 -- # '[' -z 71311 ']' 00:19:21.599 15:06:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.599 15:06:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:21.599 15:06:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.599 15:06:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:21.599 15:06:37 -- common/autotest_common.sh@10 -- # set +x 00:19:21.599 [2024-04-18 15:06:37.151984] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:21.599 [2024-04-18 15:06:37.152107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.599 [2024-04-18 15:06:37.296329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.858 [2024-04-18 15:06:37.394706] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.858 [2024-04-18 15:06:37.394774] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.858 [2024-04-18 15:06:37.394785] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.858 [2024-04-18 15:06:37.394794] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.858 [2024-04-18 15:06:37.394802] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
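[Editorial sketch] The target start that produced the EAL/app notices above boils down to launching nvmf_tgt inside that namespace and blocking until its JSON-RPC socket answers; a minimal sketch (the poll loop stands in for waitforlisten and is not the helper's actual implementation):

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Wait for the RPC socket to come up before configuring the target.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done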
00:19:21.858 [2024-04-18 15:06:37.395034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.858 [2024-04-18 15:06:37.396947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.858 [2024-04-18 15:06:37.396989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.858 [2024-04-18 15:06:37.396994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.427 15:06:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.427 15:06:38 -- common/autotest_common.sh@850 -- # return 0 00:19:22.427 15:06:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:22.427 15:06:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.427 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.427 15:06:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.427 15:06:38 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.427 15:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.427 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.427 [2024-04-18 15:06:38.085035] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.427 15:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.427 15:06:38 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:19:22.427 15:06:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:22.427 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.687 15:06:38 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:22.687 15:06:38 -- target/host_management.sh@23 -- # cat 00:19:22.687 15:06:38 -- target/host_management.sh@30 -- # rpc_cmd 00:19:22.687 15:06:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.687 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.687 Malloc0 00:19:22.687 [2024-04-18 15:06:38.185809] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.687 15:06:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.687 15:06:38 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:19:22.687 15:06:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.687 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.687 15:06:38 -- target/host_management.sh@73 -- # perfpid=71383 00:19:22.687 15:06:38 -- target/host_management.sh@74 -- # waitforlisten 71383 /var/tmp/bdevperf.sock 00:19:22.687 15:06:38 -- common/autotest_common.sh@817 -- # '[' -z 71383 ']' 00:19:22.687 15:06:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.687 15:06:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.687 15:06:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
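[Editorial sketch] While bdevperf comes up, note that the target-side configuration was applied just above as a batched rpc_cmd (the cat of rpcs.txt is not echoed). Judging from the transport, Malloc0 and listener notices, plus the host NQN exercised later, the batch is roughly equivalent to the following; the -s SPDK0 serial is an assumption, the rest is visible in the log:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192                    # logged above
rpc bdev_malloc_create -b Malloc0 64 512                       # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0  # serial number assumed
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420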
00:19:22.687 15:06:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.687 15:06:38 -- common/autotest_common.sh@10 -- # set +x 00:19:22.687 15:06:38 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:19:22.688 15:06:38 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:22.688 15:06:38 -- nvmf/common.sh@521 -- # config=() 00:19:22.688 15:06:38 -- nvmf/common.sh@521 -- # local subsystem config 00:19:22.688 15:06:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:22.688 15:06:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:22.688 { 00:19:22.688 "params": { 00:19:22.688 "name": "Nvme$subsystem", 00:19:22.688 "trtype": "$TEST_TRANSPORT", 00:19:22.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.688 "adrfam": "ipv4", 00:19:22.688 "trsvcid": "$NVMF_PORT", 00:19:22.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.688 "hdgst": ${hdgst:-false}, 00:19:22.688 "ddgst": ${ddgst:-false} 00:19:22.688 }, 00:19:22.688 "method": "bdev_nvme_attach_controller" 00:19:22.688 } 00:19:22.688 EOF 00:19:22.688 )") 00:19:22.688 15:06:38 -- nvmf/common.sh@543 -- # cat 00:19:22.688 15:06:38 -- nvmf/common.sh@545 -- # jq . 00:19:22.688 15:06:38 -- nvmf/common.sh@546 -- # IFS=, 00:19:22.688 15:06:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:22.688 "params": { 00:19:22.688 "name": "Nvme0", 00:19:22.688 "trtype": "tcp", 00:19:22.688 "traddr": "10.0.0.2", 00:19:22.688 "adrfam": "ipv4", 00:19:22.688 "trsvcid": "4420", 00:19:22.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:22.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:22.688 "hdgst": false, 00:19:22.688 "ddgst": false 00:19:22.688 }, 00:19:22.688 "method": "bdev_nvme_attach_controller" 00:19:22.688 }' 00:19:22.688 [2024-04-18 15:06:38.308895] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:22.688 [2024-04-18 15:06:38.309009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:19:22.948 [2024-04-18 15:06:38.452164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.948 [2024-04-18 15:06:38.560140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.207 Running I/O for 10 seconds... 
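[Editorial sketch] The controller bdevperf is now exercising was handed to it through gen_nvmf_target_json on /dev/fd/63. Written out to a file, the same invocation looks like this; the params block is the one printed above, while the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape rather than something echoed in this log:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
    -q 64 -o 65536 -w verify -t 10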
00:19:23.777 15:06:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:23.777 15:06:39 -- common/autotest_common.sh@850 -- # return 0 00:19:23.777 15:06:39 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:23.777 15:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.777 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.777 15:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.777 15:06:39 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.777 15:06:39 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:19:23.777 15:06:39 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:23.777 15:06:39 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:19:23.777 15:06:39 -- target/host_management.sh@52 -- # local ret=1 00:19:23.777 15:06:39 -- target/host_management.sh@53 -- # local i 00:19:23.777 15:06:39 -- target/host_management.sh@54 -- # (( i = 10 )) 00:19:23.777 15:06:39 -- target/host_management.sh@54 -- # (( i != 0 )) 00:19:23.777 15:06:39 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:19:23.777 15:06:39 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:19:23.777 15:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.777 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.777 15:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.777 15:06:39 -- target/host_management.sh@55 -- # read_io_count=1053 00:19:23.777 15:06:39 -- target/host_management.sh@58 -- # '[' 1053 -ge 100 ']' 00:19:23.777 15:06:39 -- target/host_management.sh@59 -- # ret=0 00:19:23.777 15:06:39 -- target/host_management.sh@60 -- # break 00:19:23.777 15:06:39 -- target/host_management.sh@64 -- # return 0 00:19:23.777 15:06:39 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:23.777 15:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.777 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.777 task offset: 17152 on job bdev=Nvme0n1 fails 00:19:23.777 00:19:23.777 Latency(us) 00:19:23.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.777 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:23.777 Job: Nvme0n1 ended in about 0.56 seconds with error 00:19:23.777 Verification LBA range: start 0x0 length 0x400 00:19:23.777 Nvme0n1 : 0.56 2049.42 128.09 113.86 0.00 28926.79 1829.22 26740.79 00:19:23.777 =================================================================================================================== 00:19:23.777 Total : 2049.42 128.09 113.86 0.00 28926.79 1829.22 26740.79 00:19:23.777 [2024-04-18 15:06:39.277232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.777 [2024-04-18 15:06:39.277280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-04-18 15:06:39.277301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.777 [2024-04-18 15:06:39.277310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-04-18 15:06:39.277322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.777 [2024-04-18 15:06:39.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-04-18 15:06:39.277342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.777 [2024-04-18 15:06:39.277350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-04-18 15:06:39.277361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.777 [2024-04-18 15:06:39.277370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.778 [2024-04-18 15:06:39.277984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.778 [2024-04-18 15:06:39.277993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.779 [2024-04-18 15:06:39.278473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.278552] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19648b0 was disconnected and freed. reset controller. 00:19:23.779 [2024-04-18 15:06:39.279424] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:23.779 [2024-04-18 15:06:39.281718] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:23.779 15:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.779 [2024-04-18 15:06:39.281895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1962b00 (9): Bad file descriptor 00:19:23.779 15:06:39 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:19:23.779 15:06:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.779 15:06:39 -- common/autotest_common.sh@10 -- # set +x 00:19:23.779 [2024-04-18 15:06:39.284483] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:19:23.779 [2024-04-18 15:06:39.284741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:23.779 [2024-04-18 15:06:39.284899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.779 [2024-04-18 15:06:39.285018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:19:23.779 [2024-04-18 15:06:39.285111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:19:23.779 [2024-04-18 15:06:39.285170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:19:23.779 [2024-04-18 15:06:39.285293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1962b00 00:19:23.779 [2024-04-18 15:06:39.285420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1962b00 (9): Bad file descriptor 00:19:23.779 [2024-04-18 15:06:39.285572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:23.779 [2024-04-18 15:06:39.285722] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:23.779 [2024-04-18 15:06:39.285821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
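[Editorial sketch] (The "Resetting controller failed" message that opens the next entry is the tail of this same cascade.) What the test is exercising here is access revocation while writes are in flight; stripped of the rpc_cmd wrapper, the round trip is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Revoking the host tears down its queue pair: every queued WRITE completes
# with ABORTED - SQ DELETION, which is the wall of notices above.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-adding the host happens while the initiator is already mid-reset; its
# reconnect races and is rejected ("does not allow host"), so bdevperf stops.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0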
00:19:23.779 [2024-04-18 15:06:39.285895] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:23.779 15:06:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.780 15:06:39 -- target/host_management.sh@87 -- # sleep 1 00:19:24.718 15:06:40 -- target/host_management.sh@91 -- # kill -9 71383 00:19:24.718 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71383) - No such process 00:19:24.718 15:06:40 -- target/host_management.sh@91 -- # true 00:19:24.718 15:06:40 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:19:24.718 15:06:40 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:24.718 15:06:40 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:19:24.718 15:06:40 -- nvmf/common.sh@521 -- # config=() 00:19:24.718 15:06:40 -- nvmf/common.sh@521 -- # local subsystem config 00:19:24.718 15:06:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:24.718 15:06:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:24.718 { 00:19:24.718 "params": { 00:19:24.718 "name": "Nvme$subsystem", 00:19:24.718 "trtype": "$TEST_TRANSPORT", 00:19:24.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.718 "adrfam": "ipv4", 00:19:24.718 "trsvcid": "$NVMF_PORT", 00:19:24.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.718 "hdgst": ${hdgst:-false}, 00:19:24.718 "ddgst": ${ddgst:-false} 00:19:24.718 }, 00:19:24.718 "method": "bdev_nvme_attach_controller" 00:19:24.718 } 00:19:24.718 EOF 00:19:24.718 )") 00:19:24.718 15:06:40 -- nvmf/common.sh@543 -- # cat 00:19:24.718 15:06:40 -- nvmf/common.sh@545 -- # jq . 00:19:24.718 15:06:40 -- nvmf/common.sh@546 -- # IFS=, 00:19:24.718 15:06:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:24.718 "params": { 00:19:24.718 "name": "Nvme0", 00:19:24.718 "trtype": "tcp", 00:19:24.718 "traddr": "10.0.0.2", 00:19:24.718 "adrfam": "ipv4", 00:19:24.718 "trsvcid": "4420", 00:19:24.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:24.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:24.718 "hdgst": false, 00:19:24.718 "ddgst": false 00:19:24.718 }, 00:19:24.718 "method": "bdev_nvme_attach_controller" 00:19:24.718 }' 00:19:24.718 [2024-04-18 15:06:40.356167] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:24.718 [2024-04-18 15:06:40.356249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:19:24.979 [2024-04-18 15:06:40.498826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.979 [2024-04-18 15:06:40.600650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.239 Running I/O for 1 seconds... 
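[Editorial sketch] Between the failed first pass and this second, shorter verify pass, the trap's guarded kill ran: kill -9 71383 hit a process that had already exited ("No such process") and the script's "|| true" (the bare "# true" above) kept that from aborting the test. The pattern in isolation, with the pid hard-coded only because that is what the log shows:

perfpid=71383                             # normally captured from $! when bdevperf is launched
kill -9 "$perfpid" 2>/dev/null || true    # bdevperf may already be gone
wait "$perfpid" 2>/dev/null || true
rm -f /var/tmp/spdk_cpu_lock_*            # drop the per-core lock files, as above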
00:19:26.178 00:19:26.178 Latency(us) 00:19:26.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.178 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:26.178 Verification LBA range: start 0x0 length 0x400 00:19:26.178 Nvme0n1 : 1.00 2172.91 135.81 0.00 0.00 28971.26 4184.83 26951.35 00:19:26.178 =================================================================================================================== 00:19:26.178 Total : 2172.91 135.81 0.00 0.00 28971.26 4184.83 26951.35 00:19:26.437 15:06:42 -- target/host_management.sh@102 -- # stoptarget 00:19:26.437 15:06:42 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:19:26.437 15:06:42 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:19:26.437 15:06:42 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:19:26.437 15:06:42 -- target/host_management.sh@40 -- # nvmftestfini 00:19:26.437 15:06:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:26.437 15:06:42 -- nvmf/common.sh@117 -- # sync 00:19:26.437 15:06:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:26.437 15:06:42 -- nvmf/common.sh@120 -- # set +e 00:19:26.437 15:06:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:26.437 15:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:26.437 rmmod nvme_tcp 00:19:26.437 rmmod nvme_fabrics 00:19:26.695 rmmod nvme_keyring 00:19:26.695 15:06:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.695 15:06:42 -- nvmf/common.sh@124 -- # set -e 00:19:26.695 15:06:42 -- nvmf/common.sh@125 -- # return 0 00:19:26.695 15:06:42 -- nvmf/common.sh@478 -- # '[' -n 71311 ']' 00:19:26.695 15:06:42 -- nvmf/common.sh@479 -- # killprocess 71311 00:19:26.695 15:06:42 -- common/autotest_common.sh@936 -- # '[' -z 71311 ']' 00:19:26.695 15:06:42 -- common/autotest_common.sh@940 -- # kill -0 71311 00:19:26.695 15:06:42 -- common/autotest_common.sh@941 -- # uname 00:19:26.695 15:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:26.695 15:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71311 00:19:26.695 killing process with pid 71311 00:19:26.695 15:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:26.695 15:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:26.695 15:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71311' 00:19:26.695 15:06:42 -- common/autotest_common.sh@955 -- # kill 71311 00:19:26.695 15:06:42 -- common/autotest_common.sh@960 -- # wait 71311 00:19:26.953 [2024-04-18 15:06:42.416977] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:19:26.953 15:06:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:26.953 15:06:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:26.953 15:06:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:26.953 15:06:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:26.953 15:06:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:26.953 15:06:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.953 15:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.953 15:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.953 15:06:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:26.953 00:19:26.953 real 0m5.423s 00:19:26.953 user 
0m22.439s 00:19:26.953 sys 0m1.293s 00:19:26.953 15:06:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:26.953 15:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:26.953 ************************************ 00:19:26.953 END TEST nvmf_host_management 00:19:26.953 ************************************ 00:19:26.953 15:06:42 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:26.953 00:19:26.953 real 0m6.186s 00:19:26.953 user 0m22.638s 00:19:26.953 sys 0m1.692s 00:19:26.953 15:06:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:26.953 15:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:26.953 ************************************ 00:19:26.953 END TEST nvmf_host_management 00:19:26.953 ************************************ 00:19:26.953 15:06:42 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:26.953 15:06:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:26.953 15:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:26.953 15:06:42 -- common/autotest_common.sh@10 -- # set +x 00:19:27.210 ************************************ 00:19:27.210 START TEST nvmf_lvol 00:19:27.210 ************************************ 00:19:27.210 15:06:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:19:27.210 * Looking for test storage... 00:19:27.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:27.210 15:06:42 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.210 15:06:42 -- nvmf/common.sh@7 -- # uname -s 00:19:27.210 15:06:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.210 15:06:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.211 15:06:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.211 15:06:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.211 15:06:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.211 15:06:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.211 15:06:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.211 15:06:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.211 15:06:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.211 15:06:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.211 15:06:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:27.211 15:06:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:27.211 15:06:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.211 15:06:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.211 15:06:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.211 15:06:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.211 15:06:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.211 15:06:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.211 15:06:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.211 15:06:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.211 15:06:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.211 15:06:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.211 15:06:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.211 15:06:42 -- paths/export.sh@5 -- # export PATH 00:19:27.211 15:06:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.211 15:06:42 -- nvmf/common.sh@47 -- # : 0 00:19:27.211 15:06:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.211 15:06:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.211 15:06:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.211 15:06:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.211 15:06:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.211 15:06:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.211 15:06:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.211 15:06:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:27.211 15:06:42 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:19:27.211 15:06:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:27.211 15:06:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
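[Editorial sketch] The trap registered here ties this test to the same teardown the previous one just ran (killprocess, module unload, address flush). A minimal sketch of that fini path, assuming the process and interface names used throughout this log; the namespace removal is inferred from the "Cannot open network namespace" retries at the start of each test:

nvmftestfini_sketch() {
    local nvmfpid=$1
    kill "$nvmfpid" 2>/dev/null || true      # stop nvmf_tgt (killprocess 71311 above)
    sync
    modprobe -v -r nvme-tcp || true          # unloads nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics || true
    ip -4 addr flush nvmf_init_if || true    # drop the 10.0.0.1/24 test address
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
}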
00:19:27.211 15:06:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:27.211 15:06:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:27.211 15:06:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:27.211 15:06:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.211 15:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.211 15:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.469 15:06:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:27.469 15:06:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:27.469 15:06:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:27.470 15:06:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:27.470 15:06:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:27.470 15:06:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:27.470 15:06:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.470 15:06:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.470 15:06:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:27.470 15:06:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:27.470 15:06:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.470 15:06:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.470 15:06:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.470 15:06:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.470 15:06:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.470 15:06:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.470 15:06:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.470 15:06:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.470 15:06:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:27.470 15:06:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:27.470 Cannot find device "nvmf_tgt_br" 00:19:27.470 15:06:42 -- nvmf/common.sh@155 -- # true 00:19:27.470 15:06:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.470 Cannot find device "nvmf_tgt_br2" 00:19:27.470 15:06:42 -- nvmf/common.sh@156 -- # true 00:19:27.470 15:06:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:27.470 15:06:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:27.470 Cannot find device "nvmf_tgt_br" 00:19:27.470 15:06:42 -- nvmf/common.sh@158 -- # true 00:19:27.470 15:06:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:27.470 Cannot find device "nvmf_tgt_br2" 00:19:27.470 15:06:43 -- nvmf/common.sh@159 -- # true 00:19:27.470 15:06:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:27.470 15:06:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:27.470 15:06:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.470 15:06:43 -- nvmf/common.sh@162 -- # true 00:19:27.470 15:06:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.470 15:06:43 -- nvmf/common.sh@163 -- # true 00:19:27.470 15:06:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.470 15:06:43 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:19:27.470 15:06:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.470 15:06:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.470 15:06:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.470 15:06:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.470 15:06:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.470 15:06:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:27.728 15:06:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:27.728 15:06:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:27.728 15:06:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:27.728 15:06:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:27.728 15:06:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:27.728 15:06:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.728 15:06:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:27.728 15:06:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.728 15:06:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:27.728 15:06:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:27.728 15:06:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.728 15:06:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.728 15:06:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.728 15:06:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.728 15:06:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.728 15:06:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:27.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:19:27.728 00:19:27.728 --- 10.0.0.2 ping statistics --- 00:19:27.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.728 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:19:27.728 15:06:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:27.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.141 ms 00:19:27.728 00:19:27.728 --- 10.0.0.3 ping statistics --- 00:19:27.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.728 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:19:27.728 15:06:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:27.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:19:27.728 00:19:27.728 --- 10.0.0.1 ping statistics --- 00:19:27.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.728 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:27.728 15:06:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.728 15:06:43 -- nvmf/common.sh@422 -- # return 0 00:19:27.728 15:06:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:27.728 15:06:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.728 15:06:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:27.728 15:06:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:27.728 15:06:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.728 15:06:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:27.728 15:06:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:27.728 15:06:43 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:19:27.728 15:06:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:27.728 15:06:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:27.728 15:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:27.728 15:06:43 -- nvmf/common.sh@470 -- # nvmfpid=71668 00:19:27.728 15:06:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:27.728 15:06:43 -- nvmf/common.sh@471 -- # waitforlisten 71668 00:19:27.728 15:06:43 -- common/autotest_common.sh@817 -- # '[' -z 71668 ']' 00:19:27.728 15:06:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.728 15:06:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:27.728 15:06:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.728 15:06:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:27.728 15:06:43 -- common/autotest_common.sh@10 -- # set +x 00:19:27.986 [2024-04-18 15:06:43.456547] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:27.986 [2024-04-18 15:06:43.457175] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.986 [2024-04-18 15:06:43.601788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.245 [2024-04-18 15:06:43.704906] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.245 [2024-04-18 15:06:43.704987] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.245 [2024-04-18 15:06:43.704998] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.245 [2024-04-18 15:06:43.705008] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.245 [2024-04-18 15:06:43.705016] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
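[Editor's note] The nvmf_veth_init trace above wires the initiator and the target into one bridge before nvmf_tgt is started. Below is a condensed, standalone sketch of that topology; the interface names, 10.0.0.x addresses and port 4420 are taken from the log, while the ordering is simplified and no error handling or cleanup is included.

  #!/usr/bin/env bash
  # Sketch of the veth/bridge layout nvmf_veth_init builds (simplified).
  set -e

  NS=nvmf_tgt_ns_spdk            # the target application runs inside this namespace
  ip netns add "$NS"

  # One veth pair for the initiator side, two for the target interfaces.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side ends move into the namespace and get the 10.0.0.x addresses.
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # All host-side peers hang off a single bridge; NVMe/TCP traffic on 4420 is let in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

A `ping -c 1 10.0.0.2` from the host then confirms the initiator can reach the first target address, which is exactly the check the trace performs before starting the target.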
00:19:28.245 [2024-04-18 15:06:43.705171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.245 [2024-04-18 15:06:43.705294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.245 [2024-04-18 15:06:43.705297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.812 15:06:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.812 15:06:44 -- common/autotest_common.sh@850 -- # return 0 00:19:28.812 15:06:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:28.812 15:06:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:28.812 15:06:44 -- common/autotest_common.sh@10 -- # set +x 00:19:28.812 15:06:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.812 15:06:44 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:29.070 [2024-04-18 15:06:44.661333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.070 15:06:44 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:29.328 15:06:44 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:19:29.328 15:06:44 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:29.587 15:06:45 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:19:29.587 15:06:45 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:19:29.847 15:06:45 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:19:30.106 15:06:45 -- target/nvmf_lvol.sh@29 -- # lvs=3f65df63-86c4-4d21-8dc1-42de0c4eea4c 00:19:30.106 15:06:45 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3f65df63-86c4-4d21-8dc1-42de0c4eea4c lvol 20 00:19:30.365 15:06:45 -- target/nvmf_lvol.sh@32 -- # lvol=1fc2f0bf-3d75-47bf-b680-4d99f121c71c 00:19:30.365 15:06:45 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:30.624 15:06:46 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1fc2f0bf-3d75-47bf-b680-4d99f121c71c 00:19:30.884 15:06:46 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.884 [2024-04-18 15:06:46.583565] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.143 15:06:46 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:31.402 15:06:46 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:19:31.402 15:06:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=71816 00:19:31.402 15:06:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:19:32.390 15:06:47 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1fc2f0bf-3d75-47bf-b680-4d99f121c71c MY_SNAPSHOT 00:19:32.648 15:06:48 -- target/nvmf_lvol.sh@47 -- # snapshot=44f1a7b8-da36-48ca-89a5-6ed7dd635c3b 00:19:32.648 15:06:48 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1fc2f0bf-3d75-47bf-b680-4d99f121c71c 30 00:19:32.906 15:06:48 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 44f1a7b8-da36-48ca-89a5-6ed7dd635c3b MY_CLONE 00:19:33.164 15:06:48 -- target/nvmf_lvol.sh@49 -- # clone=6f9ac2b1-91fa-44a7-ac43-5d03c34ee0eb 00:19:33.164 15:06:48 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 6f9ac2b1-91fa-44a7-ac43-5d03c34ee0eb 00:19:33.731 15:06:49 -- target/nvmf_lvol.sh@53 -- # wait 71816 00:19:41.848 Initializing NVMe Controllers 00:19:41.848 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:19:41.848 Controller IO queue size 128, less than required. 00:19:41.848 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:41.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:19:41.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:19:41.848 Initialization complete. Launching workers. 00:19:41.848 ======================================================== 00:19:41.848 Latency(us) 00:19:41.848 Device Information : IOPS MiB/s Average min max 00:19:41.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11959.20 46.72 10705.27 1234.83 114678.99 00:19:41.848 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11902.10 46.49 10758.05 2765.74 46549.35 00:19:41.848 ======================================================== 00:19:41.848 Total : 23861.30 93.21 10731.59 1234.83 114678.99 00:19:41.848 00:19:41.848 15:06:57 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.848 15:06:57 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1fc2f0bf-3d75-47bf-b680-4d99f121c71c 00:19:41.848 15:06:57 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f65df63-86c4-4d21-8dc1-42de0c4eea4c 00:19:42.106 15:06:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:19:42.106 15:06:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:19:42.106 15:06:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:19:42.106 15:06:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:42.106 15:06:57 -- nvmf/common.sh@117 -- # sync 00:19:42.106 15:06:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.106 15:06:57 -- nvmf/common.sh@120 -- # set +e 00:19:42.106 15:06:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.106 15:06:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.106 rmmod nvme_tcp 00:19:42.365 rmmod nvme_fabrics 00:19:42.365 rmmod nvme_keyring 00:19:42.365 15:06:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.365 15:06:57 -- nvmf/common.sh@124 -- # set -e 00:19:42.366 15:06:57 -- nvmf/common.sh@125 -- # return 0 00:19:42.366 15:06:57 -- nvmf/common.sh@478 -- # '[' -n 71668 ']' 00:19:42.366 15:06:57 -- nvmf/common.sh@479 -- # killprocess 71668 00:19:42.366 15:06:57 -- common/autotest_common.sh@936 -- # '[' -z 71668 ']' 00:19:42.366 15:06:57 -- common/autotest_common.sh@940 -- # kill -0 71668 00:19:42.366 15:06:57 -- common/autotest_common.sh@941 -- # uname 00:19:42.366 15:06:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:42.366 15:06:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 71668 00:19:42.366 15:06:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:42.366 killing process with pid 71668 00:19:42.366 15:06:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:42.366 15:06:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71668' 00:19:42.366 15:06:57 -- common/autotest_common.sh@955 -- # kill 71668 00:19:42.366 15:06:57 -- common/autotest_common.sh@960 -- # wait 71668 00:19:42.624 15:06:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:42.624 15:06:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:42.624 15:06:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:42.624 15:06:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.624 15:06:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.624 15:06:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.624 15:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.624 15:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.624 15:06:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:42.624 00:19:42.624 real 0m15.487s 00:19:42.624 user 1m2.663s 00:19:42.624 sys 0m5.271s 00:19:42.624 15:06:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:42.624 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.624 ************************************ 00:19:42.624 END TEST nvmf_lvol 00:19:42.624 ************************************ 00:19:42.624 15:06:58 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:42.624 15:06:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.624 15:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.624 15:06:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.883 ************************************ 00:19:42.883 START TEST nvmf_lvs_grow 00:19:42.883 ************************************ 00:19:42.883 15:06:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:19:42.883 * Looking for test storage... 
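[Editor's note] The nvmf_lvol run that ends just above (END TEST nvmf_lvol) boils down to a short RPC sequence: build a raid0 out of two malloc bdevs, put an lvol store on it, carve out a 20 MiB lvol, export it over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf writes to the namespace. A condensed sketch of that sequence follows; the RPC names and arguments are the ones visible in the trace, the captured UUID variables and the absence of error handling are simplifications.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Backing store: two 64 MiB malloc bdevs combined into a raid0.
  $RPC bdev_malloc_create 64 512            # -> Malloc0
  $RPC bdev_malloc_create 64 512            # -> Malloc1
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

  # Logical volume store plus a 20 MiB lvol on top of the raid.
  lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)

  # Export the lvol over NVMe/TCP on 10.0.0.2:4420.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # While spdk_nvme_perf runs against the namespace: snapshot, resize, clone, inflate.
  snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $RPC bdev_lvol_resize "$lvol" 30
  clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
  $RPC bdev_lvol_inflate "$clone"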
00:19:42.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:42.883 15:06:58 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.883 15:06:58 -- nvmf/common.sh@7 -- # uname -s 00:19:42.883 15:06:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.883 15:06:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.883 15:06:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.883 15:06:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.883 15:06:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.883 15:06:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.883 15:06:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.883 15:06:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.883 15:06:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.883 15:06:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:42.883 15:06:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:19:42.883 15:06:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.883 15:06:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.883 15:06:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.883 15:06:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.883 15:06:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.883 15:06:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.883 15:06:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.883 15:06:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.883 15:06:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.883 15:06:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.883 15:06:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.883 15:06:58 -- paths/export.sh@5 -- # export PATH 00:19:42.883 15:06:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.883 15:06:58 -- nvmf/common.sh@47 -- # : 0 00:19:42.883 15:06:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.883 15:06:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.883 15:06:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.883 15:06:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.883 15:06:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.883 15:06:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.883 15:06:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.883 15:06:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.883 15:06:58 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.883 15:06:58 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.883 15:06:58 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:19:42.883 15:06:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:42.883 15:06:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.883 15:06:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:42.883 15:06:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:42.883 15:06:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:42.883 15:06:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.883 15:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.883 15:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.883 15:06:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:42.883 15:06:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:42.883 15:06:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.883 15:06:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.883 15:06:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.883 15:06:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:42.883 15:06:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.883 15:06:58 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.883 15:06:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.883 15:06:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.883 15:06:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.883 15:06:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.883 15:06:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.883 15:06:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.883 15:06:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:43.142 15:06:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:43.142 Cannot find device "nvmf_tgt_br" 00:19:43.142 15:06:58 -- nvmf/common.sh@155 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.142 Cannot find device "nvmf_tgt_br2" 00:19:43.142 15:06:58 -- nvmf/common.sh@156 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:43.142 15:06:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:43.142 Cannot find device "nvmf_tgt_br" 00:19:43.142 15:06:58 -- nvmf/common.sh@158 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:43.142 Cannot find device "nvmf_tgt_br2" 00:19:43.142 15:06:58 -- nvmf/common.sh@159 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:43.142 15:06:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:43.142 15:06:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.142 15:06:58 -- nvmf/common.sh@162 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.142 15:06:58 -- nvmf/common.sh@163 -- # true 00:19:43.142 15:06:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.142 15:06:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.142 15:06:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.142 15:06:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.142 15:06:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.142 15:06:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.142 15:06:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.142 15:06:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:43.142 15:06:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:43.142 15:06:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:43.429 15:06:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:43.429 15:06:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:43.429 15:06:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:43.429 15:06:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.429 15:06:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:19:43.429 15:06:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.429 15:06:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:43.429 15:06:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:43.429 15:06:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.429 15:06:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.429 15:06:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.429 15:06:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.429 15:06:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.429 15:06:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:43.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:19:43.429 00:19:43.429 --- 10.0.0.2 ping statistics --- 00:19:43.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.429 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:43.429 15:06:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:43.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:43.429 00:19:43.429 --- 10.0.0.3 ping statistics --- 00:19:43.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.429 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:43.429 15:06:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:19:43.429 00:19:43.429 --- 10.0.0.1 ping statistics --- 00:19:43.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.429 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:43.429 15:06:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.429 15:06:58 -- nvmf/common.sh@422 -- # return 0 00:19:43.429 15:06:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:43.429 15:06:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.429 15:06:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:43.429 15:06:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:43.429 15:06:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.429 15:06:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:43.429 15:06:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:43.429 15:06:59 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:19:43.429 15:06:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:43.429 15:06:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:43.429 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.429 15:06:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:43.429 15:06:59 -- nvmf/common.sh@470 -- # nvmfpid=72187 00:19:43.429 15:06:59 -- nvmf/common.sh@471 -- # waitforlisten 72187 00:19:43.429 15:06:59 -- common/autotest_common.sh@817 -- # '[' -z 72187 ']' 00:19:43.429 15:06:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.429 15:06:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:43.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:43.429 15:06:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.429 15:06:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:43.429 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.429 [2024-04-18 15:06:59.069060] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:43.429 [2024-04-18 15:06:59.069144] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.688 [2024-04-18 15:06:59.195903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.688 [2024-04-18 15:06:59.284027] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.688 [2024-04-18 15:06:59.284091] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.688 [2024-04-18 15:06:59.284101] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.688 [2024-04-18 15:06:59.284110] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.688 [2024-04-18 15:06:59.284117] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.688 [2024-04-18 15:06:59.284157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.254 15:06:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.254 15:06:59 -- common/autotest_common.sh@850 -- # return 0 00:19:44.254 15:06:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:44.254 15:06:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.254 15:06:59 -- common/autotest_common.sh@10 -- # set +x 00:19:44.513 15:06:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.513 15:06:59 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:44.513 [2024-04-18 15:07:00.156692] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.513 15:07:00 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:19:44.513 15:07:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:44.513 15:07:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:44.513 15:07:00 -- common/autotest_common.sh@10 -- # set +x 00:19:44.771 ************************************ 00:19:44.771 START TEST lvs_grow_clean 00:19:44.771 ************************************ 00:19:44.771 15:07:00 -- common/autotest_common.sh@1111 -- # lvs_grow 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:44.771 15:07:00 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:45.029 15:07:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:19:45.029 15:07:00 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:19:45.287 15:07:00 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed lvol 150 00:19:45.545 15:07:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0acf4913-e7b3-434a-919f-62446a2696b0 00:19:45.545 15:07:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:45.545 15:07:01 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:19:45.803 [2024-04-18 15:07:01.346752] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:19:45.803 [2024-04-18 15:07:01.346835] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:19:45.803 true 00:19:45.803 15:07:01 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:45.803 15:07:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:19:46.062 15:07:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:19:46.062 15:07:01 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:19:46.322 15:07:01 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0acf4913-e7b3-434a-919f-62446a2696b0 00:19:46.322 15:07:01 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.581 [2024-04-18 15:07:02.125990] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.581 15:07:02 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:46.841 15:07:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72347 00:19:46.841 15:07:02 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:19:46.841 15:07:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:46.841 15:07:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72347 
/var/tmp/bdevperf.sock 00:19:46.841 15:07:02 -- common/autotest_common.sh@817 -- # '[' -z 72347 ']' 00:19:46.841 15:07:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.841 15:07:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.841 15:07:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.841 15:07:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.841 15:07:02 -- common/autotest_common.sh@10 -- # set +x 00:19:46.841 [2024-04-18 15:07:02.408909] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:19:46.841 [2024-04-18 15:07:02.408997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72347 ] 00:19:47.100 [2024-04-18 15:07:02.550334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.100 [2024-04-18 15:07:02.637023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.718 15:07:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.718 15:07:03 -- common/autotest_common.sh@850 -- # return 0 00:19:47.718 15:07:03 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:47.977 Nvme0n1 00:19:47.977 15:07:03 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:48.236 [ 00:19:48.236 { 00:19:48.236 "aliases": [ 00:19:48.236 "0acf4913-e7b3-434a-919f-62446a2696b0" 00:19:48.236 ], 00:19:48.236 "assigned_rate_limits": { 00:19:48.236 "r_mbytes_per_sec": 0, 00:19:48.236 "rw_ios_per_sec": 0, 00:19:48.236 "rw_mbytes_per_sec": 0, 00:19:48.236 "w_mbytes_per_sec": 0 00:19:48.236 }, 00:19:48.236 "block_size": 4096, 00:19:48.236 "claimed": false, 00:19:48.236 "driver_specific": { 00:19:48.236 "mp_policy": "active_passive", 00:19:48.236 "nvme": [ 00:19:48.236 { 00:19:48.236 "ctrlr_data": { 00:19:48.236 "ana_reporting": false, 00:19:48.236 "cntlid": 1, 00:19:48.236 "firmware_revision": "24.05", 00:19:48.236 "model_number": "SPDK bdev Controller", 00:19:48.236 "multi_ctrlr": true, 00:19:48.236 "oacs": { 00:19:48.236 "firmware": 0, 00:19:48.236 "format": 0, 00:19:48.236 "ns_manage": 0, 00:19:48.236 "security": 0 00:19:48.236 }, 00:19:48.236 "serial_number": "SPDK0", 00:19:48.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:48.236 "vendor_id": "0x8086" 00:19:48.236 }, 00:19:48.236 "ns_data": { 00:19:48.236 "can_share": true, 00:19:48.236 "id": 1 00:19:48.236 }, 00:19:48.236 "trid": { 00:19:48.236 "adrfam": "IPv4", 00:19:48.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:48.236 "traddr": "10.0.0.2", 00:19:48.236 "trsvcid": "4420", 00:19:48.236 "trtype": "TCP" 00:19:48.236 }, 00:19:48.236 "vs": { 00:19:48.236 "nvme_version": "1.3" 00:19:48.236 } 00:19:48.236 } 00:19:48.236 ] 00:19:48.236 }, 00:19:48.236 "memory_domains": [ 00:19:48.236 { 00:19:48.236 "dma_device_id": "system", 00:19:48.236 "dma_device_type": 1 00:19:48.236 } 00:19:48.236 ], 00:19:48.236 "name": "Nvme0n1", 00:19:48.236 "num_blocks": 38912, 00:19:48.236 "product_name": "NVMe 
disk", 00:19:48.236 "supported_io_types": { 00:19:48.236 "abort": true, 00:19:48.236 "compare": true, 00:19:48.236 "compare_and_write": true, 00:19:48.236 "flush": true, 00:19:48.236 "nvme_admin": true, 00:19:48.236 "nvme_io": true, 00:19:48.236 "read": true, 00:19:48.236 "reset": true, 00:19:48.236 "unmap": true, 00:19:48.236 "write": true, 00:19:48.236 "write_zeroes": true 00:19:48.236 }, 00:19:48.236 "uuid": "0acf4913-e7b3-434a-919f-62446a2696b0", 00:19:48.236 "zoned": false 00:19:48.236 } 00:19:48.236 ] 00:19:48.236 15:07:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72389 00:19:48.236 15:07:03 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:48.236 15:07:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:48.236 Running I/O for 10 seconds... 00:19:49.613 Latency(us) 00:19:49.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:49.613 Nvme0n1 : 1.00 10496.00 41.00 0.00 0.00 0.00 0.00 0.00 00:19:49.613 =================================================================================================================== 00:19:49.613 Total : 10496.00 41.00 0.00 0.00 0.00 0.00 0.00 00:19:49.613 00:19:50.181 15:07:05 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:50.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:50.439 Nvme0n1 : 2.00 10790.00 42.15 0.00 0.00 0.00 0.00 0.00 00:19:50.439 =================================================================================================================== 00:19:50.439 Total : 10790.00 42.15 0.00 0.00 0.00 0.00 0.00 00:19:50.439 00:19:50.439 true 00:19:50.439 15:07:06 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:50.439 15:07:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:50.698 15:07:06 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:50.698 15:07:06 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:50.698 15:07:06 -- target/nvmf_lvs_grow.sh@65 -- # wait 72389 00:19:51.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:51.266 Nvme0n1 : 3.00 10810.33 42.23 0.00 0.00 0.00 0.00 0.00 00:19:51.266 =================================================================================================================== 00:19:51.266 Total : 10810.33 42.23 0.00 0.00 0.00 0.00 0.00 00:19:51.266 00:19:52.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:52.203 Nvme0n1 : 4.00 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:19:52.203 =================================================================================================================== 00:19:52.203 Total : 10795.50 42.17 0.00 0.00 0.00 0.00 0.00 00:19:52.203 00:19:53.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:53.583 Nvme0n1 : 5.00 10779.20 42.11 0.00 0.00 0.00 0.00 0.00 00:19:53.583 =================================================================================================================== 00:19:53.583 Total : 10779.20 42.11 0.00 0.00 0.00 0.00 0.00 00:19:53.583 00:19:54.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:54.520 Nvme0n1 : 6.00 
10682.33 41.73 0.00 0.00 0.00 0.00 0.00 00:19:54.520 =================================================================================================================== 00:19:54.520 Total : 10682.33 41.73 0.00 0.00 0.00 0.00 0.00 00:19:54.520 00:19:55.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:55.456 Nvme0n1 : 7.00 10627.71 41.51 0.00 0.00 0.00 0.00 0.00 00:19:55.456 =================================================================================================================== 00:19:55.456 Total : 10627.71 41.51 0.00 0.00 0.00 0.00 0.00 00:19:55.456 00:19:56.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:56.393 Nvme0n1 : 8.00 10561.12 41.25 0.00 0.00 0.00 0.00 0.00 00:19:56.393 =================================================================================================================== 00:19:56.393 Total : 10561.12 41.25 0.00 0.00 0.00 0.00 0.00 00:19:56.393 00:19:57.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:57.341 Nvme0n1 : 9.00 10517.00 41.08 0.00 0.00 0.00 0.00 0.00 00:19:57.341 =================================================================================================================== 00:19:57.341 Total : 10517.00 41.08 0.00 0.00 0.00 0.00 0.00 00:19:57.341 00:19:58.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:58.289 Nvme0n1 : 10.00 10454.30 40.84 0.00 0.00 0.00 0.00 0.00 00:19:58.289 =================================================================================================================== 00:19:58.289 Total : 10454.30 40.84 0.00 0.00 0.00 0.00 0.00 00:19:58.289 00:19:58.289 00:19:58.289 Latency(us) 00:19:58.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:58.289 Nvme0n1 : 10.01 10459.27 40.86 0.00 0.00 12234.17 5527.13 26530.24 00:19:58.289 =================================================================================================================== 00:19:58.289 Total : 10459.27 40.86 0.00 0.00 12234.17 5527.13 26530.24 00:19:58.289 0 00:19:58.289 15:07:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72347 00:19:58.289 15:07:13 -- common/autotest_common.sh@936 -- # '[' -z 72347 ']' 00:19:58.289 15:07:13 -- common/autotest_common.sh@940 -- # kill -0 72347 00:19:58.289 15:07:13 -- common/autotest_common.sh@941 -- # uname 00:19:58.289 15:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:58.289 15:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72347 00:19:58.289 15:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:58.289 15:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:58.289 15:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72347' 00:19:58.289 killing process with pid 72347 00:19:58.289 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.289 00:19:58.289 Latency(us) 00:19:58.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.289 =================================================================================================================== 00:19:58.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.289 15:07:13 -- common/autotest_common.sh@955 -- # kill 72347 00:19:58.289 15:07:13 -- common/autotest_common.sh@960 -- # wait 72347 00:19:58.547 15:07:14 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:58.806 15:07:14 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:58.806 15:07:14 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:59.065 15:07:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:59.065 15:07:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:19:59.065 15:07:14 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:59.322 [2024-04-18 15:07:14.875143] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:59.322 15:07:14 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:59.322 15:07:14 -- common/autotest_common.sh@638 -- # local es=0 00:19:59.322 15:07:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:59.322 15:07:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.322 15:07:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.322 15:07:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.322 15:07:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.322 15:07:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.322 15:07:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:59.322 15:07:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.322 15:07:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:59.322 15:07:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:19:59.580 2024/04/18 15:07:15 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ffc548f9-0140-40b1-ad30-b05d8df3a5ed], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:19:59.580 request: 00:19:59.580 { 00:19:59.580 "method": "bdev_lvol_get_lvstores", 00:19:59.580 "params": { 00:19:59.580 "uuid": "ffc548f9-0140-40b1-ad30-b05d8df3a5ed" 00:19:59.580 } 00:19:59.580 } 00:19:59.580 Got JSON-RPC error response 00:19:59.580 GoRPCClient: error on JSON-RPC call 00:19:59.580 15:07:15 -- common/autotest_common.sh@641 -- # es=1 00:19:59.580 15:07:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:59.580 15:07:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:59.580 15:07:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:59.580 15:07:15 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:59.838 aio_bdev 00:19:59.838 15:07:15 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0acf4913-e7b3-434a-919f-62446a2696b0 00:19:59.838 15:07:15 -- common/autotest_common.sh@885 -- # local bdev_name=0acf4913-e7b3-434a-919f-62446a2696b0 00:19:59.838 15:07:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:59.838 15:07:15 -- common/autotest_common.sh@887 -- # 
local i 00:19:59.838 15:07:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:59.838 15:07:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:59.838 15:07:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:00.095 15:07:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0acf4913-e7b3-434a-919f-62446a2696b0 -t 2000 00:20:00.095 [ 00:20:00.095 { 00:20:00.095 "aliases": [ 00:20:00.095 "lvs/lvol" 00:20:00.095 ], 00:20:00.095 "assigned_rate_limits": { 00:20:00.095 "r_mbytes_per_sec": 0, 00:20:00.095 "rw_ios_per_sec": 0, 00:20:00.095 "rw_mbytes_per_sec": 0, 00:20:00.095 "w_mbytes_per_sec": 0 00:20:00.095 }, 00:20:00.095 "block_size": 4096, 00:20:00.095 "claimed": false, 00:20:00.095 "driver_specific": { 00:20:00.095 "lvol": { 00:20:00.095 "base_bdev": "aio_bdev", 00:20:00.095 "clone": false, 00:20:00.095 "esnap_clone": false, 00:20:00.095 "lvol_store_uuid": "ffc548f9-0140-40b1-ad30-b05d8df3a5ed", 00:20:00.095 "snapshot": false, 00:20:00.095 "thin_provision": false 00:20:00.095 } 00:20:00.095 }, 00:20:00.095 "name": "0acf4913-e7b3-434a-919f-62446a2696b0", 00:20:00.095 "num_blocks": 38912, 00:20:00.095 "product_name": "Logical Volume", 00:20:00.095 "supported_io_types": { 00:20:00.095 "abort": false, 00:20:00.095 "compare": false, 00:20:00.095 "compare_and_write": false, 00:20:00.095 "flush": false, 00:20:00.095 "nvme_admin": false, 00:20:00.095 "nvme_io": false, 00:20:00.095 "read": true, 00:20:00.095 "reset": true, 00:20:00.095 "unmap": true, 00:20:00.095 "write": true, 00:20:00.095 "write_zeroes": true 00:20:00.095 }, 00:20:00.095 "uuid": "0acf4913-e7b3-434a-919f-62446a2696b0", 00:20:00.095 "zoned": false 00:20:00.096 } 00:20:00.096 ] 00:20:00.353 15:07:15 -- common/autotest_common.sh@893 -- # return 0 00:20:00.353 15:07:15 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:20:00.353 15:07:15 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:00.611 15:07:16 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:00.611 15:07:16 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:20:00.611 15:07:16 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:00.611 15:07:16 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:00.611 15:07:16 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0acf4913-e7b3-434a-919f-62446a2696b0 00:20:00.869 15:07:16 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ffc548f9-0140-40b1-ad30-b05d8df3a5ed 00:20:01.127 15:07:16 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:01.384 15:07:16 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:01.949 ************************************ 00:20:01.949 END TEST lvs_grow_clean 00:20:01.949 ************************************ 00:20:01.949 00:20:01.949 real 0m17.097s 00:20:01.949 user 0m15.508s 00:20:01.949 sys 0m2.830s 00:20:01.949 15:07:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:01.949 15:07:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:20:01.949 15:07:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:01.949 15:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.949 15:07:17 -- common/autotest_common.sh@10 -- # set +x 00:20:01.949 ************************************ 00:20:01.949 START TEST lvs_grow_dirty 00:20:01.949 ************************************ 00:20:01.949 15:07:17 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:01.949 15:07:17 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:01.950 15:07:17 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:01.950 15:07:17 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:01.950 15:07:17 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:01.950 15:07:17 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:02.207 15:07:17 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:02.207 15:07:17 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:02.529 15:07:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:02.529 15:07:18 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:02.529 15:07:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:02.788 15:07:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:02.788 15:07:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:02.788 15:07:18 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 45fe3258-db23-4d42-a90e-aaa8c37b523b lvol 150 00:20:03.046 15:07:18 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9faa2a63-6c79-4313-9279-be6c27018641 00:20:03.046 15:07:18 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:03.046 15:07:18 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:03.046 [2024-04-18 15:07:18.733614] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:03.046 [2024-04-18 15:07:18.733697] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:03.046 true 00:20:03.304 15:07:18 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:03.304 15:07:18 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:03.304 15:07:18 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:03.304 15:07:18 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:20:03.561 15:07:19 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9faa2a63-6c79-4313-9279-be6c27018641 00:20:03.819 15:07:19 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:04.078 15:07:19 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:04.338 15:07:19 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:04.338 15:07:19 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72778 00:20:04.338 15:07:19 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.338 15:07:19 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72778 /var/tmp/bdevperf.sock 00:20:04.338 15:07:19 -- common/autotest_common.sh@817 -- # '[' -z 72778 ']' 00:20:04.338 15:07:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.338 15:07:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.338 15:07:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.338 15:07:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.338 15:07:19 -- common/autotest_common.sh@10 -- # set +x 00:20:04.338 [2024-04-18 15:07:19.848328] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
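[Editor's note] Both lvs_grow variants traced above follow the same core recipe: put an lvol store on a file-backed AIO bdev, enlarge the file, rescan the AIO bdev, then grow the lvol store in place and check that total_data_clusters jumps from 49 to 99. A minimal sketch of that recipe, with the sizes and RPC names taken from the trace (the aio_bdev path is shortened here, and cleanup is omitted):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=./aio_bdev        # shortened; the test uses test/nvmf/target/aio_bdev

  truncate -s 200M "$AIO"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_create -u "$lvs" lvol 150      # 150 MiB lvol inside 49 x 4 MiB clusters

  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

  # Grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore.
  truncate -s 400M "$AIO"
  $RPC bdev_aio_rescan aio_bdev
  $RPC bdev_lvol_grow_lvstore -u "$lvs"

  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99

The dirty variant differs only in that it grows the store while the lvol is being written to through bdevperf, as the log around this point shows.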
00:20:04.338 [2024-04-18 15:07:19.848414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72778 ] 00:20:04.338 [2024-04-18 15:07:19.993577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.597 [2024-04-18 15:07:20.098831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.164 15:07:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.164 15:07:20 -- common/autotest_common.sh@850 -- # return 0 00:20:05.164 15:07:20 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:05.422 Nvme0n1 00:20:05.423 15:07:21 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:05.681 [ 00:20:05.681 { 00:20:05.681 "aliases": [ 00:20:05.681 "9faa2a63-6c79-4313-9279-be6c27018641" 00:20:05.681 ], 00:20:05.681 "assigned_rate_limits": { 00:20:05.681 "r_mbytes_per_sec": 0, 00:20:05.681 "rw_ios_per_sec": 0, 00:20:05.681 "rw_mbytes_per_sec": 0, 00:20:05.681 "w_mbytes_per_sec": 0 00:20:05.681 }, 00:20:05.681 "block_size": 4096, 00:20:05.681 "claimed": false, 00:20:05.681 "driver_specific": { 00:20:05.681 "mp_policy": "active_passive", 00:20:05.681 "nvme": [ 00:20:05.681 { 00:20:05.681 "ctrlr_data": { 00:20:05.681 "ana_reporting": false, 00:20:05.681 "cntlid": 1, 00:20:05.681 "firmware_revision": "24.05", 00:20:05.681 "model_number": "SPDK bdev Controller", 00:20:05.681 "multi_ctrlr": true, 00:20:05.681 "oacs": { 00:20:05.681 "firmware": 0, 00:20:05.681 "format": 0, 00:20:05.681 "ns_manage": 0, 00:20:05.681 "security": 0 00:20:05.681 }, 00:20:05.681 "serial_number": "SPDK0", 00:20:05.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.681 "vendor_id": "0x8086" 00:20:05.681 }, 00:20:05.681 "ns_data": { 00:20:05.681 "can_share": true, 00:20:05.681 "id": 1 00:20:05.681 }, 00:20:05.681 "trid": { 00:20:05.681 "adrfam": "IPv4", 00:20:05.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.681 "traddr": "10.0.0.2", 00:20:05.681 "trsvcid": "4420", 00:20:05.681 "trtype": "TCP" 00:20:05.681 }, 00:20:05.681 "vs": { 00:20:05.681 "nvme_version": "1.3" 00:20:05.681 } 00:20:05.681 } 00:20:05.681 ] 00:20:05.681 }, 00:20:05.681 "memory_domains": [ 00:20:05.681 { 00:20:05.681 "dma_device_id": "system", 00:20:05.681 "dma_device_type": 1 00:20:05.681 } 00:20:05.681 ], 00:20:05.681 "name": "Nvme0n1", 00:20:05.681 "num_blocks": 38912, 00:20:05.681 "product_name": "NVMe disk", 00:20:05.681 "supported_io_types": { 00:20:05.681 "abort": true, 00:20:05.681 "compare": true, 00:20:05.681 "compare_and_write": true, 00:20:05.681 "flush": true, 00:20:05.681 "nvme_admin": true, 00:20:05.681 "nvme_io": true, 00:20:05.681 "read": true, 00:20:05.681 "reset": true, 00:20:05.682 "unmap": true, 00:20:05.682 "write": true, 00:20:05.682 "write_zeroes": true 00:20:05.682 }, 00:20:05.682 "uuid": "9faa2a63-6c79-4313-9279-be6c27018641", 00:20:05.682 "zoned": false 00:20:05.682 } 00:20:05.682 ] 00:20:05.682 15:07:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72821 00:20:05.682 15:07:21 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.682 15:07:21 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:20:05.682 Running I/O for 10 seconds... 00:20:06.617 Latency(us) 00:20:06.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:06.617 Nvme0n1 : 1.00 11064.00 43.22 0.00 0.00 0.00 0.00 0.00 00:20:06.617 =================================================================================================================== 00:20:06.617 Total : 11064.00 43.22 0.00 0.00 0.00 0.00 0.00 00:20:06.617 00:20:07.555 15:07:23 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:07.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:07.814 Nvme0n1 : 2.00 10903.50 42.59 0.00 0.00 0.00 0.00 0.00 00:20:07.814 =================================================================================================================== 00:20:07.814 Total : 10903.50 42.59 0.00 0.00 0.00 0.00 0.00 00:20:07.814 00:20:08.073 true 00:20:08.073 15:07:23 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:08.073 15:07:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:08.332 15:07:23 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:08.332 15:07:23 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:08.332 15:07:23 -- target/nvmf_lvs_grow.sh@65 -- # wait 72821 00:20:08.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:08.899 Nvme0n1 : 3.00 10828.00 42.30 0.00 0.00 0.00 0.00 0.00 00:20:08.899 =================================================================================================================== 00:20:08.899 Total : 10828.00 42.30 0.00 0.00 0.00 0.00 0.00 00:20:08.899 00:20:09.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:09.834 Nvme0n1 : 4.00 10743.50 41.97 0.00 0.00 0.00 0.00 0.00 00:20:09.834 =================================================================================================================== 00:20:09.834 Total : 10743.50 41.97 0.00 0.00 0.00 0.00 0.00 00:20:09.834 00:20:10.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:10.820 Nvme0n1 : 5.00 10617.40 41.47 0.00 0.00 0.00 0.00 0.00 00:20:10.820 =================================================================================================================== 00:20:10.820 Total : 10617.40 41.47 0.00 0.00 0.00 0.00 0.00 00:20:10.820 00:20:11.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:11.755 Nvme0n1 : 6.00 10559.17 41.25 0.00 0.00 0.00 0.00 0.00 00:20:11.755 =================================================================================================================== 00:20:11.755 Total : 10559.17 41.25 0.00 0.00 0.00 0.00 0.00 00:20:11.755 00:20:12.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:12.689 Nvme0n1 : 7.00 9898.43 38.67 0.00 0.00 0.00 0.00 0.00 00:20:12.689 =================================================================================================================== 00:20:12.689 Total : 9898.43 38.67 0.00 0.00 0.00 0.00 0.00 00:20:12.689 00:20:13.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:13.622 Nvme0n1 : 8.00 9731.25 38.01 0.00 0.00 0.00 0.00 0.00 00:20:13.622 
=================================================================================================================== 00:20:13.622 Total : 9731.25 38.01 0.00 0.00 0.00 0.00 0.00 00:20:13.622 00:20:15.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.006 Nvme0n1 : 9.00 9732.22 38.02 0.00 0.00 0.00 0.00 0.00 00:20:15.006 =================================================================================================================== 00:20:15.006 Total : 9732.22 38.02 0.00 0.00 0.00 0.00 0.00 00:20:15.006 00:20:15.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.941 Nvme0n1 : 10.00 9796.20 38.27 0.00 0.00 0.00 0.00 0.00 00:20:15.941 =================================================================================================================== 00:20:15.941 Total : 9796.20 38.27 0.00 0.00 0.00 0.00 0.00 00:20:15.941 00:20:15.941 00:20:15.941 Latency(us) 00:20:15.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:15.941 Nvme0n1 : 10.00 9800.37 38.28 0.00 0.00 13057.42 4790.18 596298.64 00:20:15.941 =================================================================================================================== 00:20:15.941 Total : 9800.37 38.28 0.00 0.00 13057.42 4790.18 596298.64 00:20:15.941 0 00:20:15.941 15:07:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72778 00:20:15.941 15:07:31 -- common/autotest_common.sh@936 -- # '[' -z 72778 ']' 00:20:15.941 15:07:31 -- common/autotest_common.sh@940 -- # kill -0 72778 00:20:15.941 15:07:31 -- common/autotest_common.sh@941 -- # uname 00:20:15.941 15:07:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.941 15:07:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72778 00:20:15.941 killing process with pid 72778 00:20:15.941 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.941 00:20:15.941 Latency(us) 00:20:15.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.941 =================================================================================================================== 00:20:15.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.941 15:07:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:15.941 15:07:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:15.941 15:07:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72778' 00:20:15.941 15:07:31 -- common/autotest_common.sh@955 -- # kill 72778 00:20:15.941 15:07:31 -- common/autotest_common.sh@960 -- # wait 72778 00:20:15.941 15:07:31 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:16.202 15:07:31 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:16.202 15:07:31 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:20:16.460 15:07:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:20:16.460 15:07:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:20:16.460 15:07:32 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72187 00:20:16.460 15:07:32 -- target/nvmf_lvs_grow.sh@74 -- # wait 72187 00:20:16.460 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72187 Killed "${NVMF_APP[@]}" "$@" 00:20:16.460 15:07:32 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:20:16.460 15:07:32 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:20:16.460 15:07:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:16.460 15:07:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.460 15:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:16.460 15:07:32 -- nvmf/common.sh@470 -- # nvmfpid=72977 00:20:16.460 15:07:32 -- nvmf/common.sh@471 -- # waitforlisten 72977 00:20:16.460 15:07:32 -- common/autotest_common.sh@817 -- # '[' -z 72977 ']' 00:20:16.460 15:07:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.460 15:07:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.460 15:07:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.460 15:07:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.460 15:07:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:16.460 15:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:16.460 [2024-04-18 15:07:32.101049] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:16.460 [2024-04-18 15:07:32.101140] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.719 [2024-04-18 15:07:32.246221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.719 [2024-04-18 15:07:32.336795] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.719 [2024-04-18 15:07:32.336858] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.719 [2024-04-18 15:07:32.336869] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.719 [2024-04-18 15:07:32.336878] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.719 [2024-04-18 15:07:32.336885] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.719 [2024-04-18 15:07:32.336930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.503 15:07:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.503 15:07:32 -- common/autotest_common.sh@850 -- # return 0 00:20:17.503 15:07:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.503 15:07:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.503 15:07:32 -- common/autotest_common.sh@10 -- # set +x 00:20:17.503 15:07:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.503 15:07:33 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:17.761 [2024-04-18 15:07:33.213867] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:17.761 [2024-04-18 15:07:33.214717] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:17.761 [2024-04-18 15:07:33.215119] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:17.761 15:07:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:20:17.761 15:07:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 9faa2a63-6c79-4313-9279-be6c27018641 00:20:17.761 15:07:33 -- common/autotest_common.sh@885 -- # local bdev_name=9faa2a63-6c79-4313-9279-be6c27018641 00:20:17.761 15:07:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:17.761 15:07:33 -- common/autotest_common.sh@887 -- # local i 00:20:17.761 15:07:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:17.761 15:07:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:17.761 15:07:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:18.019 15:07:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9faa2a63-6c79-4313-9279-be6c27018641 -t 2000 00:20:18.019 [ 00:20:18.019 { 00:20:18.019 "aliases": [ 00:20:18.019 "lvs/lvol" 00:20:18.019 ], 00:20:18.019 "assigned_rate_limits": { 00:20:18.019 "r_mbytes_per_sec": 0, 00:20:18.019 "rw_ios_per_sec": 0, 00:20:18.019 "rw_mbytes_per_sec": 0, 00:20:18.019 "w_mbytes_per_sec": 0 00:20:18.019 }, 00:20:18.019 "block_size": 4096, 00:20:18.019 "claimed": false, 00:20:18.019 "driver_specific": { 00:20:18.019 "lvol": { 00:20:18.019 "base_bdev": "aio_bdev", 00:20:18.019 "clone": false, 00:20:18.019 "esnap_clone": false, 00:20:18.019 "lvol_store_uuid": "45fe3258-db23-4d42-a90e-aaa8c37b523b", 00:20:18.019 "snapshot": false, 00:20:18.019 "thin_provision": false 00:20:18.019 } 00:20:18.019 }, 00:20:18.019 "name": "9faa2a63-6c79-4313-9279-be6c27018641", 00:20:18.019 "num_blocks": 38912, 00:20:18.019 "product_name": "Logical Volume", 00:20:18.019 "supported_io_types": { 00:20:18.019 "abort": false, 00:20:18.019 "compare": false, 00:20:18.019 "compare_and_write": false, 00:20:18.019 "flush": false, 00:20:18.019 "nvme_admin": false, 00:20:18.019 "nvme_io": false, 00:20:18.019 "read": true, 00:20:18.019 "reset": true, 00:20:18.019 "unmap": true, 00:20:18.019 "write": true, 00:20:18.019 "write_zeroes": true 00:20:18.019 }, 00:20:18.019 "uuid": "9faa2a63-6c79-4313-9279-be6c27018641", 00:20:18.019 "zoned": false 00:20:18.019 } 00:20:18.019 ] 00:20:18.019 15:07:33 -- common/autotest_common.sh@893 -- # return 0 00:20:18.019 15:07:33 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:18.019 15:07:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:20:18.277 15:07:33 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:20:18.277 15:07:33 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:18.277 15:07:33 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:20:18.546 15:07:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:20:18.546 15:07:34 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:18.819 [2024-04-18 15:07:34.285258] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:18.819 15:07:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:18.819 15:07:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:18.819 15:07:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:18.819 15:07:34 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.819 15:07:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:18.819 15:07:34 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.819 15:07:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:18.819 15:07:34 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.819 15:07:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:18.819 15:07:34 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.819 15:07:34 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.819 15:07:34 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:18.819 2024/04/18 15:07:34 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:45fe3258-db23-4d42-a90e-aaa8c37b523b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:20:19.078 request: 00:20:19.078 { 00:20:19.078 "method": "bdev_lvol_get_lvstores", 00:20:19.078 "params": { 00:20:19.078 "uuid": "45fe3258-db23-4d42-a90e-aaa8c37b523b" 00:20:19.078 } 00:20:19.078 } 00:20:19.078 Got JSON-RPC error response 00:20:19.078 GoRPCClient: error on JSON-RPC call 00:20:19.078 15:07:34 -- common/autotest_common.sh@641 -- # es=1 00:20:19.078 15:07:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:19.078 15:07:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:19.078 15:07:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:19.078 15:07:34 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:19.078 aio_bdev 00:20:19.078 15:07:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9faa2a63-6c79-4313-9279-be6c27018641 00:20:19.078 15:07:34 -- common/autotest_common.sh@885 -- # local bdev_name=9faa2a63-6c79-4313-9279-be6c27018641 00:20:19.078 15:07:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:19.078 
15:07:34 -- common/autotest_common.sh@887 -- # local i 00:20:19.078 15:07:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:19.078 15:07:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:19.078 15:07:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:19.337 15:07:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9faa2a63-6c79-4313-9279-be6c27018641 -t 2000 00:20:19.595 [ 00:20:19.595 { 00:20:19.595 "aliases": [ 00:20:19.595 "lvs/lvol" 00:20:19.595 ], 00:20:19.595 "assigned_rate_limits": { 00:20:19.595 "r_mbytes_per_sec": 0, 00:20:19.595 "rw_ios_per_sec": 0, 00:20:19.595 "rw_mbytes_per_sec": 0, 00:20:19.595 "w_mbytes_per_sec": 0 00:20:19.595 }, 00:20:19.595 "block_size": 4096, 00:20:19.595 "claimed": false, 00:20:19.595 "driver_specific": { 00:20:19.595 "lvol": { 00:20:19.595 "base_bdev": "aio_bdev", 00:20:19.595 "clone": false, 00:20:19.595 "esnap_clone": false, 00:20:19.595 "lvol_store_uuid": "45fe3258-db23-4d42-a90e-aaa8c37b523b", 00:20:19.595 "snapshot": false, 00:20:19.595 "thin_provision": false 00:20:19.595 } 00:20:19.595 }, 00:20:19.595 "name": "9faa2a63-6c79-4313-9279-be6c27018641", 00:20:19.595 "num_blocks": 38912, 00:20:19.595 "product_name": "Logical Volume", 00:20:19.595 "supported_io_types": { 00:20:19.595 "abort": false, 00:20:19.595 "compare": false, 00:20:19.595 "compare_and_write": false, 00:20:19.595 "flush": false, 00:20:19.595 "nvme_admin": false, 00:20:19.595 "nvme_io": false, 00:20:19.595 "read": true, 00:20:19.595 "reset": true, 00:20:19.595 "unmap": true, 00:20:19.595 "write": true, 00:20:19.595 "write_zeroes": true 00:20:19.595 }, 00:20:19.595 "uuid": "9faa2a63-6c79-4313-9279-be6c27018641", 00:20:19.595 "zoned": false 00:20:19.595 } 00:20:19.595 ] 00:20:19.595 15:07:35 -- common/autotest_common.sh@893 -- # return 0 00:20:19.595 15:07:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:19.595 15:07:35 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:19.854 15:07:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:19.854 15:07:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:19.854 15:07:35 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:20.113 15:07:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:20.113 15:07:35 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9faa2a63-6c79-4313-9279-be6c27018641 00:20:20.372 15:07:35 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45fe3258-db23-4d42-a90e-aaa8c37b523b 00:20:20.372 15:07:36 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:20.631 15:07:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:21.199 00:20:21.199 real 0m19.175s 00:20:21.199 user 0m38.491s 00:20:21.199 sys 0m7.878s 00:20:21.199 15:07:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:21.199 15:07:36 -- common/autotest_common.sh@10 -- # set +x 00:20:21.199 ************************************ 00:20:21.199 END TEST lvs_grow_dirty 00:20:21.199 ************************************ 00:20:21.199 15:07:36 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:20:21.199 15:07:36 -- common/autotest_common.sh@794 -- # type=--id 00:20:21.199 15:07:36 -- common/autotest_common.sh@795 -- # id=0 00:20:21.199 15:07:36 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:21.199 15:07:36 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:21.199 15:07:36 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:21.199 15:07:36 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:21.199 15:07:36 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:21.199 15:07:36 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:21.199 nvmf_trace.0 00:20:21.199 15:07:36 -- common/autotest_common.sh@809 -- # return 0 00:20:21.199 15:07:36 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:20:21.200 15:07:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:21.200 15:07:36 -- nvmf/common.sh@117 -- # sync 00:20:21.459 15:07:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.459 15:07:36 -- nvmf/common.sh@120 -- # set +e 00:20:21.459 15:07:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.459 15:07:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.459 rmmod nvme_tcp 00:20:21.459 rmmod nvme_fabrics 00:20:21.459 rmmod nvme_keyring 00:20:21.459 15:07:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.459 15:07:36 -- nvmf/common.sh@124 -- # set -e 00:20:21.459 15:07:36 -- nvmf/common.sh@125 -- # return 0 00:20:21.459 15:07:36 -- nvmf/common.sh@478 -- # '[' -n 72977 ']' 00:20:21.459 15:07:36 -- nvmf/common.sh@479 -- # killprocess 72977 00:20:21.459 15:07:36 -- common/autotest_common.sh@936 -- # '[' -z 72977 ']' 00:20:21.459 15:07:36 -- common/autotest_common.sh@940 -- # kill -0 72977 00:20:21.459 15:07:36 -- common/autotest_common.sh@941 -- # uname 00:20:21.459 15:07:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.459 15:07:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72977 00:20:21.459 15:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.459 15:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.459 killing process with pid 72977 00:20:21.459 15:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72977' 00:20:21.459 15:07:37 -- common/autotest_common.sh@955 -- # kill 72977 00:20:21.459 15:07:37 -- common/autotest_common.sh@960 -- # wait 72977 00:20:21.727 15:07:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:21.727 15:07:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:21.727 15:07:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:21.727 15:07:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.727 15:07:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.727 15:07:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.727 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.727 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.727 15:07:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:21.727 00:20:21.727 real 0m38.907s 00:20:21.727 user 0m59.720s 00:20:21.727 sys 0m11.656s 00:20:21.727 15:07:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:21.727 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:20:21.727 
************************************ 00:20:21.727 END TEST nvmf_lvs_grow 00:20:21.727 ************************************ 00:20:21.727 15:07:37 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:21.727 15:07:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:21.727 15:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.727 15:07:37 -- common/autotest_common.sh@10 -- # set +x 00:20:21.987 ************************************ 00:20:21.987 START TEST nvmf_bdev_io_wait 00:20:21.987 ************************************ 00:20:21.987 15:07:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:20:21.987 * Looking for test storage... 00:20:21.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:21.987 15:07:37 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.987 15:07:37 -- nvmf/common.sh@7 -- # uname -s 00:20:21.987 15:07:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.987 15:07:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.987 15:07:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.987 15:07:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.987 15:07:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.987 15:07:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.987 15:07:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.987 15:07:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.987 15:07:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.987 15:07:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.987 15:07:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:21.987 15:07:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:21.987 15:07:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.987 15:07:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.987 15:07:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.987 15:07:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.987 15:07:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.987 15:07:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.987 15:07:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.987 15:07:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.987 15:07:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.987 15:07:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.987 15:07:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.987 15:07:37 -- paths/export.sh@5 -- # export PATH 00:20:21.987 15:07:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.987 15:07:37 -- nvmf/common.sh@47 -- # : 0 00:20:21.987 15:07:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.987 15:07:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.987 15:07:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.987 15:07:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.987 15:07:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.987 15:07:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.987 15:07:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.987 15:07:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.987 15:07:37 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.987 15:07:37 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.987 15:07:37 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:20:21.987 15:07:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:21.987 15:07:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.987 15:07:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:21.987 15:07:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:21.987 15:07:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:21.987 15:07:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.987 15:07:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.987 15:07:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.987 15:07:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:21.987 15:07:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:21.987 15:07:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:21.987 15:07:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:21.987 15:07:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:20:21.987 15:07:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:21.987 15:07:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.987 15:07:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.987 15:07:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.987 15:07:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:21.987 15:07:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.987 15:07:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.987 15:07:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.987 15:07:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.987 15:07:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.987 15:07:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.987 15:07:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.987 15:07:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.987 15:07:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:21.987 15:07:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:21.987 Cannot find device "nvmf_tgt_br" 00:20:21.987 15:07:37 -- nvmf/common.sh@155 -- # true 00:20:21.987 15:07:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.246 Cannot find device "nvmf_tgt_br2" 00:20:22.246 15:07:37 -- nvmf/common.sh@156 -- # true 00:20:22.246 15:07:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:22.246 15:07:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:22.246 Cannot find device "nvmf_tgt_br" 00:20:22.246 15:07:37 -- nvmf/common.sh@158 -- # true 00:20:22.246 15:07:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:22.246 Cannot find device "nvmf_tgt_br2" 00:20:22.246 15:07:37 -- nvmf/common.sh@159 -- # true 00:20:22.246 15:07:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:22.246 15:07:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:22.247 15:07:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.247 15:07:37 -- nvmf/common.sh@162 -- # true 00:20:22.247 15:07:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.247 15:07:37 -- nvmf/common.sh@163 -- # true 00:20:22.247 15:07:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.247 15:07:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.247 15:07:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.247 15:07:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.247 15:07:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.247 15:07:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.247 15:07:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.247 15:07:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.247 15:07:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.247 
15:07:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:22.247 15:07:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:22.247 15:07:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:22.506 15:07:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:22.506 15:07:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.506 15:07:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.506 15:07:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.506 15:07:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:22.506 15:07:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:22.506 15:07:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.506 15:07:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.506 15:07:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.506 15:07:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.506 15:07:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.506 15:07:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:22.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:20:22.506 00:20:22.506 --- 10.0.0.2 ping statistics --- 00:20:22.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.506 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:20:22.506 15:07:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:22.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:20:22.506 00:20:22.506 --- 10.0.0.3 ping statistics --- 00:20:22.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.506 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:22.506 15:07:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:22.506 00:20:22.506 --- 10.0.0.1 ping statistics --- 00:20:22.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.506 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:22.506 15:07:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.506 15:07:38 -- nvmf/common.sh@422 -- # return 0 00:20:22.506 15:07:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:22.506 15:07:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.506 15:07:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:22.506 15:07:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:22.506 15:07:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.506 15:07:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:22.506 15:07:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:22.506 15:07:38 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:22.506 15:07:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:22.506 15:07:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:22.506 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.506 15:07:38 -- nvmf/common.sh@470 -- # nvmfpid=73400 00:20:22.506 15:07:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:22.506 15:07:38 -- nvmf/common.sh@471 -- # waitforlisten 73400 00:20:22.506 15:07:38 -- common/autotest_common.sh@817 -- # '[' -z 73400 ']' 00:20:22.506 15:07:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.506 15:07:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:22.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.506 15:07:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.506 15:07:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:22.506 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.506 [2024-04-18 15:07:38.148196] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:22.506 [2024-04-18 15:07:38.148276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.765 [2024-04-18 15:07:38.291247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.765 [2024-04-18 15:07:38.386172] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.765 [2024-04-18 15:07:38.386237] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.765 [2024-04-18 15:07:38.386248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.765 [2024-04-18 15:07:38.386256] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.765 [2024-04-18 15:07:38.386264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.765 [2024-04-18 15:07:38.386369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.766 [2024-04-18 15:07:38.386469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.766 [2024-04-18 15:07:38.387347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.766 [2024-04-18 15:07:38.387350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:23.702 15:07:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.702 15:07:39 -- common/autotest_common.sh@850 -- # return 0 00:20:23.702 15:07:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.702 15:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 15:07:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 [2024-04-18 15:07:39.179721] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 Malloc0 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.702 15:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.702 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.702 [2024-04-18 15:07:39.242820] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.702 15:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73453 00:20:23.702 15:07:39 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@30 -- # READ_PID=73455 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # config=() 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.702 15:07:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.702 { 00:20:23.702 "params": { 00:20:23.702 "name": "Nvme$subsystem", 00:20:23.702 "trtype": "$TEST_TRANSPORT", 00:20:23.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.702 "adrfam": "ipv4", 00:20:23.702 "trsvcid": "$NVMF_PORT", 00:20:23.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.702 "hdgst": ${hdgst:-false}, 00:20:23.702 "ddgst": ${ddgst:-false} 00:20:23.702 }, 00:20:23.702 "method": "bdev_nvme_attach_controller" 00:20:23.702 } 00:20:23.702 EOF 00:20:23.702 )") 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73457 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # config=() 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.702 15:07:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.702 { 00:20:23.702 "params": { 00:20:23.702 "name": "Nvme$subsystem", 00:20:23.702 "trtype": "$TEST_TRANSPORT", 00:20:23.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.702 "adrfam": "ipv4", 00:20:23.702 "trsvcid": "$NVMF_PORT", 00:20:23.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.702 "hdgst": ${hdgst:-false}, 00:20:23.702 "ddgst": ${ddgst:-false} 00:20:23.702 }, 00:20:23.702 "method": "bdev_nvme_attach_controller" 00:20:23.702 } 00:20:23.702 EOF 00:20:23.702 )") 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # cat 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # config=() 00:20:23.702 15:07:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.702 15:07:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.702 { 00:20:23.702 "params": { 00:20:23.702 "name": "Nvme$subsystem", 00:20:23.702 "trtype": "$TEST_TRANSPORT", 00:20:23.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.702 "adrfam": "ipv4", 00:20:23.702 "trsvcid": "$NVMF_PORT", 00:20:23.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.702 "hdgst": ${hdgst:-false}, 00:20:23.702 "ddgst": ${ddgst:-false} 00:20:23.702 }, 00:20:23.702 "method": "bdev_nvme_attach_controller" 00:20:23.702 } 00:20:23.702 EOF 00:20:23.702 )") 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # cat 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=73462 00:20:23.702 15:07:39 -- target/bdev_io_wait.sh@35 -- # sync 00:20:23.702 15:07:39 -- nvmf/common.sh@543 -- # cat 00:20:23.703 15:07:39 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:20:23.703 15:07:39 -- nvmf/common.sh@545 -- # jq . 00:20:23.703 15:07:39 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:20:23.703 15:07:39 -- nvmf/common.sh@521 -- # config=() 00:20:23.703 15:07:39 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.703 15:07:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.703 15:07:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.703 { 00:20:23.703 "params": { 00:20:23.703 "name": "Nvme$subsystem", 00:20:23.703 "trtype": "$TEST_TRANSPORT", 00:20:23.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.703 "adrfam": "ipv4", 00:20:23.703 "trsvcid": "$NVMF_PORT", 00:20:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.703 "hdgst": ${hdgst:-false}, 00:20:23.703 "ddgst": ${ddgst:-false} 00:20:23.703 }, 00:20:23.703 "method": "bdev_nvme_attach_controller" 00:20:23.703 } 00:20:23.703 EOF 00:20:23.703 )") 00:20:23.703 15:07:39 -- nvmf/common.sh@545 -- # jq . 00:20:23.703 15:07:39 -- nvmf/common.sh@543 -- # cat 00:20:23.703 15:07:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.703 15:07:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.703 "params": { 00:20:23.703 "name": "Nvme1", 00:20:23.703 "trtype": "tcp", 00:20:23.703 "traddr": "10.0.0.2", 00:20:23.703 "adrfam": "ipv4", 00:20:23.703 "trsvcid": "4420", 00:20:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.703 "hdgst": false, 00:20:23.703 "ddgst": false 00:20:23.703 }, 00:20:23.703 "method": "bdev_nvme_attach_controller" 00:20:23.703 }' 00:20:23.703 15:07:39 -- nvmf/common.sh@545 -- # jq . 00:20:23.703 15:07:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.703 15:07:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.703 "params": { 00:20:23.703 "name": "Nvme1", 00:20:23.703 "trtype": "tcp", 00:20:23.703 "traddr": "10.0.0.2", 00:20:23.703 "adrfam": "ipv4", 00:20:23.703 "trsvcid": "4420", 00:20:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.703 "hdgst": false, 00:20:23.703 "ddgst": false 00:20:23.703 }, 00:20:23.703 "method": "bdev_nvme_attach_controller" 00:20:23.703 }' 00:20:23.703 15:07:39 -- nvmf/common.sh@545 -- # jq . 
00:20:23.703 15:07:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.703 15:07:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.703 "params": { 00:20:23.703 "name": "Nvme1", 00:20:23.703 "trtype": "tcp", 00:20:23.703 "traddr": "10.0.0.2", 00:20:23.703 "adrfam": "ipv4", 00:20:23.703 "trsvcid": "4420", 00:20:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.703 "hdgst": false, 00:20:23.703 "ddgst": false 00:20:23.703 }, 00:20:23.703 "method": "bdev_nvme_attach_controller" 00:20:23.703 }' 00:20:23.703 15:07:39 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.703 15:07:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.703 "params": { 00:20:23.703 "name": "Nvme1", 00:20:23.703 "trtype": "tcp", 00:20:23.703 "traddr": "10.0.0.2", 00:20:23.703 "adrfam": "ipv4", 00:20:23.703 "trsvcid": "4420", 00:20:23.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.703 "hdgst": false, 00:20:23.703 "ddgst": false 00:20:23.703 }, 00:20:23.703 "method": "bdev_nvme_attach_controller" 00:20:23.703 }' 00:20:23.703 [2024-04-18 15:07:39.303989] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:23.703 [2024-04-18 15:07:39.304063] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:23.703 15:07:39 -- target/bdev_io_wait.sh@37 -- # wait 73453 00:20:23.703 [2024-04-18 15:07:39.309774] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:23.703 [2024-04-18 15:07:39.310181] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:20:23.703 [2024-04-18 15:07:39.319131] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:23.703 [2024-04-18 15:07:39.319193] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:20:23.703 [2024-04-18 15:07:39.332517] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:23.703 [2024-04-18 15:07:39.332622] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:20:23.961 [2024-04-18 15:07:39.490317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.961 [2024-04-18 15:07:39.561440] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.961 [2024-04-18 15:07:39.569074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:23.961 [2024-04-18 15:07:39.629779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.961 [2024-04-18 15:07:39.638195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:24.218 [2024-04-18 15:07:39.703773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.218 Running I/O for 1 seconds... 
00:20:24.218 [2024-04-18 15:07:39.713803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:24.218 [2024-04-18 15:07:39.779351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:24.218 Running I/O for 1 seconds... 00:20:24.218 Running I/O for 1 seconds... 00:20:24.476 Running I/O for 1 seconds... 00:20:25.041 00:20:25.041 Latency(us) 00:20:25.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.041 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:20:25.041 Nvme1n1 : 1.01 11306.51 44.17 0.00 0.00 11281.09 6422.00 20845.19 00:20:25.041 =================================================================================================================== 00:20:25.041 Total : 11306.51 44.17 0.00 0.00 11281.09 6422.00 20845.19 00:20:25.300 00:20:25.300 Latency(us) 00:20:25.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.300 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:20:25.300 Nvme1n1 : 1.01 8676.57 33.89 0.00 0.00 14685.04 8580.22 24529.94 00:20:25.300 =================================================================================================================== 00:20:25.300 Total : 8676.57 33.89 0.00 0.00 14685.04 8580.22 24529.94 00:20:25.300 00:20:25.300 Latency(us) 00:20:25.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.300 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:20:25.300 Nvme1n1 : 1.01 9867.32 38.54 0.00 0.00 12928.86 6027.21 26003.84 00:20:25.300 =================================================================================================================== 00:20:25.300 Total : 9867.32 38.54 0.00 0.00 12928.86 6027.21 26003.84 00:20:25.300 00:20:25.300 Latency(us) 00:20:25.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.300 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:20:25.300 Nvme1n1 : 1.00 217277.08 848.74 0.00 0.00 586.90 256.62 1026.47 00:20:25.300 =================================================================================================================== 00:20:25.300 Total : 217277.08 848.74 0.00 0.00 586.90 256.62 1026.47 00:20:25.300 15:07:40 -- target/bdev_io_wait.sh@38 -- # wait 73455 00:20:25.559 15:07:41 -- target/bdev_io_wait.sh@39 -- # wait 73457 00:20:25.559 15:07:41 -- target/bdev_io_wait.sh@40 -- # wait 73462 00:20:25.559 15:07:41 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.559 15:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.559 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:20:25.560 15:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.560 15:07:41 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:20:25.560 15:07:41 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:20:25.560 15:07:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:25.560 15:07:41 -- nvmf/common.sh@117 -- # sync 00:20:25.818 15:07:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.818 15:07:41 -- nvmf/common.sh@120 -- # set +e 00:20:25.818 15:07:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.818 15:07:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.818 rmmod nvme_tcp 00:20:25.818 rmmod nvme_fabrics 00:20:25.818 rmmod nvme_keyring 00:20:25.818 15:07:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.818 15:07:41 -- 
nvmf/common.sh@124 -- # set -e 00:20:25.818 15:07:41 -- nvmf/common.sh@125 -- # return 0 00:20:25.818 15:07:41 -- nvmf/common.sh@478 -- # '[' -n 73400 ']' 00:20:25.818 15:07:41 -- nvmf/common.sh@479 -- # killprocess 73400 00:20:25.818 15:07:41 -- common/autotest_common.sh@936 -- # '[' -z 73400 ']' 00:20:25.818 15:07:41 -- common/autotest_common.sh@940 -- # kill -0 73400 00:20:25.818 15:07:41 -- common/autotest_common.sh@941 -- # uname 00:20:25.818 15:07:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.818 15:07:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73400 00:20:25.818 killing process with pid 73400 00:20:25.818 15:07:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.818 15:07:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.818 15:07:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73400' 00:20:25.818 15:07:41 -- common/autotest_common.sh@955 -- # kill 73400 00:20:25.818 15:07:41 -- common/autotest_common.sh@960 -- # wait 73400 00:20:26.077 15:07:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:26.077 15:07:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:26.077 15:07:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:26.077 15:07:41 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.077 15:07:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.077 15:07:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.077 15:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.077 15:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.077 15:07:41 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:26.077 00:20:26.077 real 0m4.172s 00:20:26.077 user 0m17.552s 00:20:26.077 sys 0m2.217s 00:20:26.077 15:07:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:26.077 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:20:26.077 ************************************ 00:20:26.077 END TEST nvmf_bdev_io_wait 00:20:26.077 ************************************ 00:20:26.077 15:07:41 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:26.077 15:07:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:26.077 15:07:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:26.077 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:20:26.336 ************************************ 00:20:26.336 START TEST nvmf_queue_depth 00:20:26.336 ************************************ 00:20:26.336 15:07:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:20:26.336 * Looking for test storage... 
00:20:26.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:26.336 15:07:41 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.336 15:07:41 -- nvmf/common.sh@7 -- # uname -s 00:20:26.336 15:07:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.336 15:07:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.336 15:07:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.336 15:07:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.336 15:07:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.336 15:07:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.336 15:07:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.336 15:07:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.336 15:07:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.336 15:07:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.336 15:07:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:26.336 15:07:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:26.336 15:07:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.336 15:07:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.336 15:07:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.336 15:07:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.336 15:07:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.336 15:07:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.336 15:07:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.336 15:07:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.336 15:07:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.336 15:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.337 15:07:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.337 15:07:41 -- paths/export.sh@5 -- # export PATH 00:20:26.337 15:07:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.337 15:07:41 -- nvmf/common.sh@47 -- # : 0 00:20:26.337 15:07:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.337 15:07:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.337 15:07:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.337 15:07:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.337 15:07:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.337 15:07:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.337 15:07:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.337 15:07:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.337 15:07:41 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:20:26.337 15:07:41 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:20:26.337 15:07:41 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.337 15:07:41 -- target/queue_depth.sh@19 -- # nvmftestinit 00:20:26.337 15:07:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:26.337 15:07:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.337 15:07:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:26.337 15:07:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:26.337 15:07:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:26.337 15:07:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.337 15:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.337 15:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.337 15:07:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:26.337 15:07:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:26.337 15:07:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:26.337 15:07:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:26.337 15:07:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:26.337 15:07:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:26.337 15:07:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.337 15:07:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.337 15:07:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.337 15:07:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:26.337 15:07:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.337 15:07:41 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.337 15:07:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.337 15:07:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.337 15:07:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.337 15:07:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.337 15:07:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.337 15:07:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.337 15:07:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:26.337 15:07:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:26.337 Cannot find device "nvmf_tgt_br" 00:20:26.337 15:07:42 -- nvmf/common.sh@155 -- # true 00:20:26.337 15:07:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.337 Cannot find device "nvmf_tgt_br2" 00:20:26.337 15:07:42 -- nvmf/common.sh@156 -- # true 00:20:26.337 15:07:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:26.337 15:07:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:26.596 Cannot find device "nvmf_tgt_br" 00:20:26.596 15:07:42 -- nvmf/common.sh@158 -- # true 00:20:26.596 15:07:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:26.596 Cannot find device "nvmf_tgt_br2" 00:20:26.596 15:07:42 -- nvmf/common.sh@159 -- # true 00:20:26.596 15:07:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:26.596 15:07:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:26.596 15:07:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.596 15:07:42 -- nvmf/common.sh@162 -- # true 00:20:26.596 15:07:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.596 15:07:42 -- nvmf/common.sh@163 -- # true 00:20:26.596 15:07:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.596 15:07:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.596 15:07:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.596 15:07:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.596 15:07:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.596 15:07:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.596 15:07:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.596 15:07:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.596 15:07:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.596 15:07:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:26.596 15:07:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:26.596 15:07:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:26.596 15:07:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:26.596 15:07:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.596 15:07:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:20:26.596 15:07:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.596 15:07:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:26.596 15:07:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:26.596 15:07:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.855 15:07:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.855 15:07:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.855 15:07:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.855 15:07:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.855 15:07:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:26.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:20:26.855 00:20:26.855 --- 10.0.0.2 ping statistics --- 00:20:26.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.855 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:26.855 15:07:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:26.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:26.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:20:26.855 00:20:26.855 --- 10.0.0.3 ping statistics --- 00:20:26.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.855 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:26.855 15:07:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:20:26.855 00:20:26.855 --- 10.0.0.1 ping statistics --- 00:20:26.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.855 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:26.855 15:07:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.855 15:07:42 -- nvmf/common.sh@422 -- # return 0 00:20:26.855 15:07:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.855 15:07:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.855 15:07:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.855 15:07:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.855 15:07:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.855 15:07:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.855 15:07:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.855 15:07:42 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:20:26.855 15:07:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:26.855 15:07:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.855 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:20:26.855 15:07:42 -- nvmf/common.sh@470 -- # nvmfpid=73697 00:20:26.855 15:07:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:26.855 15:07:42 -- nvmf/common.sh@471 -- # waitforlisten 73697 00:20:26.855 15:07:42 -- common/autotest_common.sh@817 -- # '[' -z 73697 ']' 00:20:26.855 15:07:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.855 15:07:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.855 15:07:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.855 15:07:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.855 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:20:26.855 [2024-04-18 15:07:42.480708] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:26.855 [2024-04-18 15:07:42.480781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.195 [2024-04-18 15:07:42.608757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.195 [2024-04-18 15:07:42.705725] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.195 [2024-04-18 15:07:42.705960] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.195 [2024-04-18 15:07:42.706023] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.195 [2024-04-18 15:07:42.706093] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.195 [2024-04-18 15:07:42.706136] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.195 [2024-04-18 15:07:42.706207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.779 15:07:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.779 15:07:43 -- common/autotest_common.sh@850 -- # return 0 00:20:27.779 15:07:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:27.779 15:07:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:27.779 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.779 15:07:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.779 15:07:43 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:27.779 15:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.779 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.779 [2024-04-18 15:07:43.425089] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.779 15:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.779 15:07:43 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:27.779 15:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.779 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:27.779 Malloc0 00:20:27.779 15:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.779 15:07:43 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.779 15:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.779 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 15:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.037 15:07:43 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:28.037 15:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.037 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 15:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.037 15:07:43 -- target/queue_depth.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:28.037 15:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.037 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 [2024-04-18 15:07:43.508911] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.037 15:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.037 15:07:43 -- target/queue_depth.sh@30 -- # bdevperf_pid=73747 00:20:28.037 15:07:43 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:20:28.037 15:07:43 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.037 15:07:43 -- target/queue_depth.sh@33 -- # waitforlisten 73747 /var/tmp/bdevperf.sock 00:20:28.037 15:07:43 -- common/autotest_common.sh@817 -- # '[' -z 73747 ']' 00:20:28.037 15:07:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.037 15:07:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.037 15:07:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.037 15:07:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.037 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:20:28.037 [2024-04-18 15:07:43.574284] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:28.037 [2024-04-18 15:07:43.574380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73747 ] 00:20:28.037 [2024-04-18 15:07:43.707074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.296 [2024-04-18 15:07:43.798533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.865 15:07:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.865 15:07:44 -- common/autotest_common.sh@850 -- # return 0 00:20:28.865 15:07:44 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:28.865 15:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.865 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:20:28.865 NVMe0n1 00:20:28.865 15:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.865 15:07:44 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:29.124 Running I/O for 10 seconds... 
00:20:39.111 00:20:39.111 Latency(us) 00:20:39.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.111 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:20:39.111 Verification LBA range: start 0x0 length 0x4000 00:20:39.111 NVMe0n1 : 10.06 11443.72 44.70 0.00 0.00 89145.48 19371.28 90539.69 00:20:39.111 =================================================================================================================== 00:20:39.111 Total : 11443.72 44.70 0.00 0.00 89145.48 19371.28 90539.69 00:20:39.111 0 00:20:39.111 15:07:54 -- target/queue_depth.sh@39 -- # killprocess 73747 00:20:39.111 15:07:54 -- common/autotest_common.sh@936 -- # '[' -z 73747 ']' 00:20:39.111 15:07:54 -- common/autotest_common.sh@940 -- # kill -0 73747 00:20:39.111 15:07:54 -- common/autotest_common.sh@941 -- # uname 00:20:39.111 15:07:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.111 15:07:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73747 00:20:39.111 15:07:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:39.111 15:07:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.111 killing process with pid 73747 00:20:39.111 15:07:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73747' 00:20:39.111 15:07:54 -- common/autotest_common.sh@955 -- # kill 73747 00:20:39.111 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.111 00:20:39.111 Latency(us) 00:20:39.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.111 =================================================================================================================== 00:20:39.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.111 15:07:54 -- common/autotest_common.sh@960 -- # wait 73747 00:20:39.370 15:07:54 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:20:39.370 15:07:54 -- target/queue_depth.sh@43 -- # nvmftestfini 00:20:39.370 15:07:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.370 15:07:54 -- nvmf/common.sh@117 -- # sync 00:20:39.370 15:07:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.370 15:07:55 -- nvmf/common.sh@120 -- # set +e 00:20:39.370 15:07:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.370 15:07:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.370 rmmod nvme_tcp 00:20:39.370 rmmod nvme_fabrics 00:20:39.370 rmmod nvme_keyring 00:20:39.370 15:07:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.630 15:07:55 -- nvmf/common.sh@124 -- # set -e 00:20:39.630 15:07:55 -- nvmf/common.sh@125 -- # return 0 00:20:39.630 15:07:55 -- nvmf/common.sh@478 -- # '[' -n 73697 ']' 00:20:39.630 15:07:55 -- nvmf/common.sh@479 -- # killprocess 73697 00:20:39.630 15:07:55 -- common/autotest_common.sh@936 -- # '[' -z 73697 ']' 00:20:39.630 15:07:55 -- common/autotest_common.sh@940 -- # kill -0 73697 00:20:39.630 15:07:55 -- common/autotest_common.sh@941 -- # uname 00:20:39.630 15:07:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.630 15:07:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73697 00:20:39.630 15:07:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.630 15:07:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.630 killing process with pid 73697 00:20:39.630 15:07:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73697' 00:20:39.630 15:07:55 -- 
common/autotest_common.sh@955 -- # kill 73697 00:20:39.630 15:07:55 -- common/autotest_common.sh@960 -- # wait 73697 00:20:39.890 15:07:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:39.890 15:07:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:39.890 15:07:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:39.890 15:07:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.890 15:07:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.890 15:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.890 15:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.890 15:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.890 15:07:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:39.890 00:20:39.890 real 0m13.629s 00:20:39.890 user 0m22.822s 00:20:39.890 sys 0m2.461s 00:20:39.890 15:07:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:39.890 15:07:55 -- common/autotest_common.sh@10 -- # set +x 00:20:39.890 ************************************ 00:20:39.890 END TEST nvmf_queue_depth 00:20:39.890 ************************************ 00:20:39.890 15:07:55 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:39.890 15:07:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:39.890 15:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:39.890 15:07:55 -- common/autotest_common.sh@10 -- # set +x 00:20:39.890 ************************************ 00:20:39.890 START TEST nvmf_multipath 00:20:39.890 ************************************ 00:20:39.890 15:07:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:20:40.149 * Looking for test storage... 
00:20:40.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:40.149 15:07:55 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.149 15:07:55 -- nvmf/common.sh@7 -- # uname -s 00:20:40.149 15:07:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.149 15:07:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.149 15:07:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.149 15:07:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.149 15:07:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.149 15:07:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.149 15:07:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.149 15:07:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.149 15:07:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.149 15:07:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.149 15:07:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:40.149 15:07:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:20:40.149 15:07:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.149 15:07:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.149 15:07:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.149 15:07:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.149 15:07:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.149 15:07:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.149 15:07:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.149 15:07:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.149 15:07:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.149 15:07:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.150 15:07:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.150 15:07:55 -- paths/export.sh@5 -- # export PATH 00:20:40.150 15:07:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.150 15:07:55 -- nvmf/common.sh@47 -- # : 0 00:20:40.150 15:07:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.150 15:07:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.150 15:07:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.150 15:07:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.150 15:07:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.150 15:07:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.150 15:07:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.150 15:07:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.150 15:07:55 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.150 15:07:55 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.150 15:07:55 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:40.150 15:07:55 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:40.150 15:07:55 -- target/multipath.sh@43 -- # nvmftestinit 00:20:40.150 15:07:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:40.150 15:07:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.150 15:07:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:40.150 15:07:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:40.150 15:07:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:40.150 15:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.150 15:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.150 15:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.150 15:07:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:40.150 15:07:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:40.150 15:07:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:40.150 15:07:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:40.150 15:07:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:40.150 15:07:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:40.150 15:07:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.150 15:07:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.150 15:07:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:40.150 15:07:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:40.150 15:07:55 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.150 15:07:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.150 15:07:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.150 15:07:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.150 15:07:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.150 15:07:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.150 15:07:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.150 15:07:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.150 15:07:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:40.150 15:07:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:40.150 Cannot find device "nvmf_tgt_br" 00:20:40.150 15:07:55 -- nvmf/common.sh@155 -- # true 00:20:40.150 15:07:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.150 Cannot find device "nvmf_tgt_br2" 00:20:40.150 15:07:55 -- nvmf/common.sh@156 -- # true 00:20:40.150 15:07:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:40.150 15:07:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:40.150 Cannot find device "nvmf_tgt_br" 00:20:40.150 15:07:55 -- nvmf/common.sh@158 -- # true 00:20:40.150 15:07:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:40.150 Cannot find device "nvmf_tgt_br2" 00:20:40.150 15:07:55 -- nvmf/common.sh@159 -- # true 00:20:40.150 15:07:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:40.410 15:07:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:40.410 15:07:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.410 15:07:55 -- nvmf/common.sh@162 -- # true 00:20:40.410 15:07:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.410 15:07:55 -- nvmf/common.sh@163 -- # true 00:20:40.410 15:07:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.410 15:07:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.410 15:07:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.410 15:07:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.410 15:07:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.410 15:07:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.410 15:07:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.410 15:07:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.410 15:07:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.410 15:07:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:40.410 15:07:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:40.410 15:07:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:40.410 15:07:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:40.410 15:07:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:20:40.410 15:07:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.410 15:07:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.410 15:07:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:40.410 15:07:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:40.410 15:07:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.410 15:07:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.410 15:07:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.410 15:07:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.410 15:07:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.410 15:07:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:40.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:20:40.410 00:20:40.410 --- 10.0.0.2 ping statistics --- 00:20:40.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.410 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:40.410 15:07:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:40.410 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.410 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:40.410 00:20:40.410 --- 10.0.0.3 ping statistics --- 00:20:40.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.410 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:40.410 15:07:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:40.671 00:20:40.671 --- 10.0.0.1 ping statistics --- 00:20:40.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.671 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:40.671 15:07:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.671 15:07:56 -- nvmf/common.sh@422 -- # return 0 00:20:40.671 15:07:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:40.671 15:07:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.671 15:07:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:40.671 15:07:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:40.671 15:07:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.671 15:07:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:40.671 15:07:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:40.671 15:07:56 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:20:40.671 15:07:56 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:20:40.671 15:07:56 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:20:40.671 15:07:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:40.671 15:07:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:40.671 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:20:40.671 15:07:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.671 15:07:56 -- nvmf/common.sh@470 -- # nvmfpid=74083 00:20:40.671 15:07:56 -- nvmf/common.sh@471 -- # waitforlisten 74083 00:20:40.671 15:07:56 -- common/autotest_common.sh@817 -- # '[' -z 74083 ']' 00:20:40.671 15:07:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.671 15:07:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:40.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.671 15:07:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.671 15:07:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:40.671 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:20:40.671 [2024-04-18 15:07:56.201405] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:20:40.672 [2024-04-18 15:07:56.201487] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.672 [2024-04-18 15:07:56.344567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.930 [2024-04-18 15:07:56.441900] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.930 [2024-04-18 15:07:56.441961] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.930 [2024-04-18 15:07:56.441971] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.930 [2024-04-18 15:07:56.441980] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.930 [2024-04-18 15:07:56.441987] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.930 [2024-04-18 15:07:56.442916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.930 [2024-04-18 15:07:56.443000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.930 [2024-04-18 15:07:56.443088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.930 [2024-04-18 15:07:56.443090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.497 15:07:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:41.497 15:07:57 -- common/autotest_common.sh@850 -- # return 0 00:20:41.498 15:07:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:41.498 15:07:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:41.498 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.498 15:07:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.498 15:07:57 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.756 [2024-04-18 15:07:57.340690] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.756 15:07:57 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:42.018 Malloc0 00:20:42.018 15:07:57 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:20:42.276 15:07:57 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.535 15:07:58 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.535 [2024-04-18 15:07:58.203870] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.535 15:07:58 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.794 [2024-04-18 15:07:58.399821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:42.794 15:07:58 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:20:43.054 15:07:58 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:20:43.313 15:07:58 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:20:43.313 15:07:58 -- common/autotest_common.sh@1184 -- # local i=0 00:20:43.313 15:07:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:43.313 15:07:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:43.313 15:07:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:45.218 15:08:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:45.218 15:08:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:45.218 15:08:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:45.218 15:08:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:45.218 15:08:00 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.218 15:08:00 -- common/autotest_common.sh@1194 -- # return 0 00:20:45.218 15:08:00 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:20:45.218 15:08:00 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:20:45.218 15:08:00 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:20:45.218 15:08:00 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:45.218 15:08:00 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:20:45.218 15:08:00 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:20:45.218 15:08:00 -- target/multipath.sh@38 -- # return 0 00:20:45.218 15:08:00 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:20:45.218 15:08:00 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:20:45.219 15:08:00 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:20:45.219 15:08:00 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:20:45.219 15:08:00 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:20:45.219 15:08:00 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:20:45.219 15:08:00 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:20:45.219 15:08:00 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:45.219 15:08:00 -- target/multipath.sh@22 -- # local timeout=20 00:20:45.219 15:08:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:45.219 15:08:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:45.219 15:08:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:45.219 15:08:00 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:20:45.219 15:08:00 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:45.219 15:08:00 -- target/multipath.sh@22 -- # local timeout=20 00:20:45.219 15:08:00 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:45.219 15:08:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:45.219 15:08:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:45.219 15:08:00 -- target/multipath.sh@85 -- # echo numa 00:20:45.219 15:08:00 -- target/multipath.sh@88 -- # fio_pid=74221 00:20:45.219 15:08:00 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:45.219 15:08:00 -- target/multipath.sh@90 -- # sleep 1 00:20:45.478 [global] 00:20:45.478 thread=1 00:20:45.478 invalidate=1 00:20:45.478 rw=randrw 00:20:45.478 time_based=1 00:20:45.479 runtime=6 00:20:45.479 ioengine=libaio 00:20:45.479 direct=1 00:20:45.479 bs=4096 00:20:45.479 iodepth=128 00:20:45.479 norandommap=0 00:20:45.479 numjobs=1 00:20:45.479 00:20:45.479 verify_dump=1 00:20:45.479 verify_backlog=512 00:20:45.479 verify_state_save=0 00:20:45.479 do_verify=1 00:20:45.479 verify=crc32c-intel 00:20:45.479 [job0] 00:20:45.479 filename=/dev/nvme0n1 00:20:45.479 Could not set queue depth (nvme0n1) 00:20:45.479 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:45.479 fio-3.35 00:20:45.479 Starting 1 thread 00:20:46.415 15:08:01 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:46.674 15:08:02 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:46.674 15:08:02 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:20:46.674 15:08:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:46.674 15:08:02 -- target/multipath.sh@22 -- # local timeout=20 00:20:46.674 15:08:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:46.674 15:08:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:46.675 15:08:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:46.932 15:08:02 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:20:46.932 15:08:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:46.932 15:08:02 -- target/multipath.sh@22 -- # local timeout=20 00:20:46.932 15:08:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:46.932 15:08:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:46.932 15:08:02 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:46.932 15:08:02 -- target/multipath.sh@25 -- # sleep 1s 00:20:47.868 15:08:03 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:47.868 15:08:03 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:47.868 15:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:47.868 15:08:03 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:48.126 15:08:03 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:48.126 15:08:03 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:20:48.126 15:08:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:48.126 15:08:03 -- target/multipath.sh@22 -- # local timeout=20 00:20:48.126 15:08:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:48.126 15:08:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:48.126 15:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:48.126 15:08:03 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:20:48.126 15:08:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:48.126 15:08:03 -- target/multipath.sh@22 -- # local timeout=20 00:20:48.126 15:08:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:48.126 15:08:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:48.126 15:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:48.126 15:08:03 -- target/multipath.sh@25 -- # sleep 1s 00:20:49.516 15:08:04 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:49.516 15:08:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:49.516 15:08:04 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:49.516 15:08:04 -- target/multipath.sh@104 -- # wait 74221 00:20:52.098 00:20:52.098 job0: (groupid=0, jobs=1): err= 0: pid=74246: Thu Apr 18 15:08:07 2024 00:20:52.098 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(325MiB/6004msec) 00:20:52.098 slat (usec): min=4, max=4542, avg=38.46, stdev=151.73 00:20:52.098 clat (usec): min=383, max=16930, avg=6312.60, stdev=1077.39 00:20:52.098 lat (usec): min=469, max=17677, avg=6351.06, stdev=1082.39 00:20:52.098 clat percentiles (usec): 00:20:52.098 | 1.00th=[ 3851], 5.00th=[ 4817], 10.00th=[ 5211], 20.00th=[ 5604], 00:20:52.098 | 30.00th=[ 5866], 40.00th=[ 6063], 50.00th=[ 6259], 60.00th=[ 6456], 00:20:52.098 | 70.00th=[ 6652], 80.00th=[ 6915], 90.00th=[ 7439], 95.00th=[ 8225], 00:20:52.098 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[13304], 99.95th=[13829], 00:20:52.098 | 99.99th=[16057] 00:20:52.098 bw ( KiB/s): min= 9688, max=37896, per=50.70%, avg=28135.27, stdev=9776.28, samples=11 00:20:52.098 iops : min= 2422, max= 9474, avg=7033.82, stdev=2444.07, samples=11 00:20:52.098 write: IOPS=8404, BW=32.8MiB/s (34.4MB/s)(169MiB/5138msec); 0 zone resets 00:20:52.098 slat (usec): min=12, max=1577, avg=52.72, stdev=95.34 00:20:52.098 clat (usec): min=243, max=16797, avg=5426.42, stdev=1033.96 00:20:52.098 lat (usec): min=389, max=16848, avg=5479.15, stdev=1035.67 00:20:52.098 clat percentiles (usec): 00:20:52.098 | 1.00th=[ 2900], 5.00th=[ 3884], 10.00th=[ 4293], 20.00th=[ 4752], 00:20:52.098 | 30.00th=[ 5014], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5604], 00:20:52.098 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6325], 95.00th=[ 6980], 00:20:52.098 | 99.00th=[ 9110], 99.50th=[ 9634], 99.90th=[11076], 99.95th=[12911], 00:20:52.098 | 99.99th=[15795] 00:20:52.098 bw ( KiB/s): min=10000, max=37328, per=83.69%, avg=28132.36, stdev=9484.53, samples=11 00:20:52.098 iops : min= 2500, max= 9332, avg=7033.09, stdev=2371.13, samples=11 00:20:52.098 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:20:52.098 lat (msec) : 2=0.17%, 4=2.82%, 10=96.46%, 20=0.54% 00:20:52.098 cpu : usr=7.33%, sys=32.81%, ctx=9749, majf=0, minf=133 00:20:52.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:52.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.098 issued rwts: total=83293,43180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.098 00:20:52.098 Run status group 0 (all jobs): 00:20:52.098 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=325MiB (341MB), run=6004-6004msec 00:20:52.098 WRITE: bw=32.8MiB/s (34.4MB/s), 32.8MiB/s-32.8MiB/s (34.4MB/s-34.4MB/s), io=169MiB (177MB), run=5138-5138msec 00:20:52.098 00:20:52.098 Disk stats (read/write): 00:20:52.098 nvme0n1: ios=82158/42305, merge=0/0, ticks=459925/195577, in_queue=655502, util=98.65% 00:20:52.098 15:08:07 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:20:52.098 15:08:07 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:52.098 15:08:07 -- 
target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:20:52.098 15:08:07 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:20:52.098 15:08:07 -- target/multipath.sh@22 -- # local timeout=20 00:20:52.098 15:08:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:52.098 15:08:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:52.098 15:08:07 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:52.098 15:08:07 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:20:52.098 15:08:07 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:20:52.098 15:08:07 -- target/multipath.sh@22 -- # local timeout=20 00:20:52.098 15:08:07 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:52.098 15:08:07 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:52.098 15:08:07 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:20:52.098 15:08:07 -- target/multipath.sh@25 -- # sleep 1s 00:20:53.035 15:08:08 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:53.035 15:08:08 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:53.036 15:08:08 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:20:53.036 15:08:08 -- target/multipath.sh@113 -- # echo round-robin 00:20:53.036 15:08:08 -- target/multipath.sh@116 -- # fio_pid=74372 00:20:53.036 15:08:08 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:20:53.036 15:08:08 -- target/multipath.sh@118 -- # sleep 1 00:20:53.036 [global] 00:20:53.036 thread=1 00:20:53.036 invalidate=1 00:20:53.036 rw=randrw 00:20:53.036 time_based=1 00:20:53.036 runtime=6 00:20:53.036 ioengine=libaio 00:20:53.036 direct=1 00:20:53.036 bs=4096 00:20:53.036 iodepth=128 00:20:53.036 norandommap=0 00:20:53.036 numjobs=1 00:20:53.036 00:20:53.036 verify_dump=1 00:20:53.036 verify_backlog=512 00:20:53.036 verify_state_save=0 00:20:53.036 do_verify=1 00:20:53.036 verify=crc32c-intel 00:20:53.295 [job0] 00:20:53.295 filename=/dev/nvme0n1 00:20:53.295 Could not set queue depth (nvme0n1) 00:20:53.295 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.295 fio-3.35 00:20:53.295 Starting 1 thread 00:20:54.232 15:08:09 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:54.491 15:08:10 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:54.750 15:08:10 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:20:54.750 15:08:10 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:20:54.750 15:08:10 -- target/multipath.sh@22 -- # local timeout=20 00:20:54.750 15:08:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:54.750 15:08:10 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:20:54.750 15:08:10 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:54.750 15:08:10 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:20:54.750 15:08:10 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:20:54.750 15:08:10 -- target/multipath.sh@22 -- # local timeout=20 00:20:54.750 15:08:10 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:54.750 15:08:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:54.750 15:08:10 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:54.750 15:08:10 -- target/multipath.sh@25 -- # sleep 1s 00:20:55.687 15:08:11 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:55.687 15:08:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:55.687 15:08:11 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:55.687 15:08:11 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:55.976 15:08:11 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:55.976 15:08:11 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:20:55.976 15:08:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:20:55.976 15:08:11 -- target/multipath.sh@22 -- # local timeout=20 00:20:55.976 15:08:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:20:55.976 15:08:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:20:55.976 15:08:11 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:20:55.976 15:08:11 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:20:55.976 15:08:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:20:55.976 15:08:11 -- target/multipath.sh@22 -- # local timeout=20 00:20:55.976 15:08:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:20:55.976 15:08:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:20:55.976 15:08:11 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:55.976 15:08:11 -- target/multipath.sh@25 -- # sleep 1s 00:20:57.375 15:08:12 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:57.375 15:08:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:57.375 15:08:12 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:57.375 15:08:12 -- target/multipath.sh@132 -- # wait 74372 00:20:59.911 00:20:59.911 job0: (groupid=0, jobs=1): err= 0: pid=74402: Thu Apr 18 15:08:15 2024 00:20:59.911 read: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(343MiB/6000msec) 00:20:59.911 slat (usec): min=3, max=4697, avg=32.76, stdev=139.35 00:20:59.911 clat (usec): min=252, max=15209, avg=6058.52, stdev=1247.84 00:20:59.911 lat (usec): min=289, max=15222, avg=6091.29, stdev=1255.31 00:20:59.911 clat percentiles (usec): 00:20:59.911 | 1.00th=[ 3064], 5.00th=[ 3982], 10.00th=[ 4490], 20.00th=[ 5211], 00:20:59.911 | 30.00th=[ 5538], 40.00th=[ 5800], 50.00th=[ 6063], 60.00th=[ 6325], 00:20:59.911 | 70.00th=[ 6587], 80.00th=[ 6849], 90.00th=[ 7308], 95.00th=[ 8094], 00:20:59.911 | 99.00th=[ 9765], 99.50th=[10290], 99.90th=[12125], 99.95th=[13042], 00:20:59.911 | 99.99th=[14877] 00:20:59.911 bw ( KiB/s): min=11168, max=47936, per=51.07%, avg=29882.45, stdev=11883.62, samples=11 00:20:59.911 iops : min= 2792, max=11984, avg=7470.55, stdev=2970.82, samples=11 00:20:59.911 write: IOPS=8995, BW=35.1MiB/s (36.8MB/s)(178MiB/5066msec); 0 zone resets 00:20:59.911 slat (usec): min=4, max=2786, avg=46.75, stdev=82.76 00:20:59.911 clat (usec): min=333, max=12835, avg=5014.96, stdev=1240.48 00:20:59.911 lat (usec): min=416, max=12861, avg=5061.70, stdev=1248.13 00:20:59.911 clat percentiles (usec): 00:20:59.911 | 1.00th=[ 2278], 5.00th=[ 2999], 10.00th=[ 3392], 20.00th=[ 3982], 00:20:59.911 | 30.00th=[ 4490], 40.00th=[ 4883], 50.00th=[ 5145], 60.00th=[ 5342], 00:20:59.911 | 70.00th=[ 5604], 80.00th=[ 5866], 90.00th=[ 6194], 95.00th=[ 6849], 00:20:59.911 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10814], 99.95th=[11338], 00:20:59.911 | 99.99th=[12256] 00:20:59.911 bw ( KiB/s): min=11712, max=47056, per=83.31%, avg=29975.45, stdev=11408.62, samples=11 00:20:59.911 iops : min= 2928, max=11764, avg=7493.82, stdev=2852.10, samples=11 00:20:59.911 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.02% 00:20:59.911 lat (msec) : 2=0.30%, 4=10.09%, 10=88.94%, 20=0.60% 00:20:59.911 cpu : usr=8.24%, sys=32.76%, ctx=10920, majf=0, minf=133 00:20:59.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:59.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:59.911 issued rwts: total=87763,45569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:59.911 00:20:59.911 Run status group 0 (all jobs): 00:20:59.911 READ: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=343MiB (359MB), run=6000-6000msec 00:20:59.911 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=178MiB (187MB), run=5066-5066msec 00:20:59.911 00:20:59.911 Disk stats (read/write): 00:20:59.911 nvme0n1: ios=86710/44686, merge=0/0, ticks=460079/186797, in_queue=646876, util=98.50% 00:20:59.911 15:08:15 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:59.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:59.911 15:08:15 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:59.911 15:08:15 -- common/autotest_common.sh@1205 -- # local i=0 00:20:59.911 15:08:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:59.912 15:08:15 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.912 15:08:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:59.912 15:08:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.912 15:08:15 -- common/autotest_common.sh@1217 -- # return 0 00:20:59.912 15:08:15 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.912 15:08:15 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:20:59.912 15:08:15 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:20:59.912 15:08:15 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:59.912 15:08:15 -- target/multipath.sh@144 -- # nvmftestfini 00:20:59.912 15:08:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:59.912 15:08:15 -- nvmf/common.sh@117 -- # sync 00:20:59.912 15:08:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.912 15:08:15 -- nvmf/common.sh@120 -- # set +e 00:20:59.912 15:08:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.912 15:08:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.912 rmmod nvme_tcp 00:20:59.912 rmmod nvme_fabrics 00:20:59.912 rmmod nvme_keyring 00:20:59.912 15:08:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.912 15:08:15 -- nvmf/common.sh@124 -- # set -e 00:20:59.912 15:08:15 -- nvmf/common.sh@125 -- # return 0 00:20:59.912 15:08:15 -- nvmf/common.sh@478 -- # '[' -n 74083 ']' 00:20:59.912 15:08:15 -- nvmf/common.sh@479 -- # killprocess 74083 00:20:59.912 15:08:15 -- common/autotest_common.sh@936 -- # '[' -z 74083 ']' 00:20:59.912 15:08:15 -- common/autotest_common.sh@940 -- # kill -0 74083 00:20:59.912 15:08:15 -- common/autotest_common.sh@941 -- # uname 00:20:59.912 15:08:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:59.912 15:08:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74083 00:20:59.912 killing process with pid 74083 00:20:59.912 15:08:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:59.912 15:08:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:59.912 15:08:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74083' 00:20:59.912 15:08:15 -- common/autotest_common.sh@955 -- # kill 74083 00:20:59.912 15:08:15 -- common/autotest_common.sh@960 -- # wait 74083 00:21:00.172 15:08:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:00.172 15:08:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:00.172 15:08:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:00.172 15:08:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.172 15:08:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.172 15:08:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.172 15:08:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.172 15:08:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.172 15:08:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:00.172 ************************************ 00:21:00.172 END TEST nvmf_multipath 00:21:00.172 ************************************ 00:21:00.172 00:21:00.172 real 0m20.192s 00:21:00.172 user 1m17.287s 00:21:00.172 sys 0m8.665s 00:21:00.172 15:08:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:00.172 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:00.172 15:08:15 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:00.172 15:08:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:00.172 15:08:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:00.172 15:08:15 -- common/autotest_common.sh@10 -- # set +x 00:21:00.431 ************************************ 00:21:00.431 START TEST nvmf_zcopy 00:21:00.431 ************************************ 00:21:00.431 15:08:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:00.431 * Looking for test storage... 00:21:00.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:00.431 15:08:16 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.431 15:08:16 -- nvmf/common.sh@7 -- # uname -s 00:21:00.431 15:08:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.431 15:08:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.431 15:08:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.431 15:08:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.431 15:08:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.431 15:08:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.431 15:08:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.431 15:08:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.431 15:08:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.431 15:08:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:00.431 15:08:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:00.431 15:08:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.431 15:08:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.431 15:08:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.431 15:08:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.431 15:08:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.431 15:08:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.431 15:08:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.431 15:08:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.431 15:08:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.431 15:08:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.431 15:08:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.431 15:08:16 -- paths/export.sh@5 -- # export PATH 00:21:00.431 15:08:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.431 15:08:16 -- nvmf/common.sh@47 -- # : 0 00:21:00.431 15:08:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.431 15:08:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.431 15:08:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.431 15:08:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.431 15:08:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.431 15:08:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.431 15:08:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.431 15:08:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.431 15:08:16 -- target/zcopy.sh@12 -- # nvmftestinit 00:21:00.431 15:08:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:00.431 15:08:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.431 15:08:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:00.431 15:08:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:00.431 15:08:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:00.431 15:08:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.431 15:08:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.431 15:08:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.431 15:08:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:00.431 15:08:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:00.431 15:08:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.431 15:08:16 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.431 15:08:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:00.431 15:08:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:00.431 15:08:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.431 15:08:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.431 15:08:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.432 15:08:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.432 15:08:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.432 15:08:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.432 15:08:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.432 15:08:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.432 15:08:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:00.432 15:08:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:00.432 Cannot find device "nvmf_tgt_br" 00:21:00.432 15:08:16 -- nvmf/common.sh@155 -- # true 00:21:00.432 15:08:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.692 Cannot find device "nvmf_tgt_br2" 00:21:00.692 15:08:16 -- nvmf/common.sh@156 -- # true 00:21:00.692 15:08:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:00.692 15:08:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:00.692 Cannot find device "nvmf_tgt_br" 00:21:00.692 15:08:16 -- nvmf/common.sh@158 -- # true 00:21:00.692 15:08:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:00.692 Cannot find device "nvmf_tgt_br2" 00:21:00.692 15:08:16 -- nvmf/common.sh@159 -- # true 00:21:00.692 15:08:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:00.692 15:08:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:00.692 15:08:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.692 15:08:16 -- nvmf/common.sh@162 -- # true 00:21:00.692 15:08:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.692 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.692 15:08:16 -- nvmf/common.sh@163 -- # true 00:21:00.692 15:08:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.692 15:08:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.692 15:08:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.692 15:08:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:00.692 15:08:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:00.692 15:08:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:00.692 15:08:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:00.692 15:08:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:00.692 15:08:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:00.692 15:08:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:00.692 15:08:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:00.693 15:08:16 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:00.693 15:08:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:00.693 15:08:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:00.693 15:08:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:00.693 15:08:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:00.693 15:08:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:00.954 15:08:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:00.954 15:08:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:00.954 15:08:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:00.954 15:08:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:00.954 15:08:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:00.954 15:08:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:00.954 15:08:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:00.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:21:00.954 00:21:00.954 --- 10.0.0.2 ping statistics --- 00:21:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.954 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:00.954 15:08:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:00.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:00.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:21:00.954 00:21:00.954 --- 10.0.0.3 ping statistics --- 00:21:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.954 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:00.954 15:08:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:00.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
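The nvmf_veth_init sequence traced above builds a small virtual topology: the target runs inside the nvmf_tgt_ns_spdk network namespace and exposes two listener addresses (10.0.0.2 and 10.0.0.3), while the initiator stays in the host namespace on 10.0.0.1, with all host-side veth peers joined by the nvmf_br bridge. A condensed sketch of those commands, with interface names and addresses as they appear in the log (link-up steps omitted; this is a readability reconstruction, not the helper verbatim):

  # target side gets its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # one initiator-side veth pair and two target-side pairs
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bridge the host-side peers together and allow NVMe/TCP on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks around this point simply confirm that 10.0.0.2, 10.0.0.3 and 10.0.0.1 all answer before any NVMe-oF traffic is attempted.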
00:21:00.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:00.954 00:21:00.954 --- 10.0.0.1 ping statistics --- 00:21:00.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.954 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:00.954 15:08:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.954 15:08:16 -- nvmf/common.sh@422 -- # return 0 00:21:00.954 15:08:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:00.955 15:08:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.955 15:08:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:00.955 15:08:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:00.955 15:08:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.955 15:08:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:00.955 15:08:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:00.955 15:08:16 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:00.955 15:08:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:00.955 15:08:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.955 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.955 15:08:16 -- nvmf/common.sh@470 -- # nvmfpid=74684 00:21:00.955 15:08:16 -- nvmf/common.sh@471 -- # waitforlisten 74684 00:21:00.955 15:08:16 -- common/autotest_common.sh@817 -- # '[' -z 74684 ']' 00:21:00.955 15:08:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.955 15:08:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.955 15:08:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.955 15:08:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.955 15:08:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.955 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:21:00.955 [2024-04-18 15:08:16.581755] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:00.955 [2024-04-18 15:08:16.581858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.214 [2024-04-18 15:08:16.725697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.214 [2024-04-18 15:08:16.823483] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.214 [2024-04-18 15:08:16.823563] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.214 [2024-04-18 15:08:16.823574] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.214 [2024-04-18 15:08:16.823583] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.214 [2024-04-18 15:08:16.823591] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
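nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket is usable. Roughly, with the poll loop written out as a simplified stand-in for the real waitforlisten helper (which also tracks the pid and socket path), that is:

  # launch nvmf_tgt in the target namespace with the flags shown above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # simplified stand-in for waitforlisten: poll until the RPC socket answers,
  # giving up if the target process has already exited
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

Once the socket answers, the zcopy test below configures the transport with nvmf_create_transport -t tcp -o -c 0 --zcopy and builds subsystem cnode1 with a malloc0 namespace, as the RPC trace shows.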
00:21:01.214 [2024-04-18 15:08:16.823635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.782 15:08:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.782 15:08:17 -- common/autotest_common.sh@850 -- # return 0 00:21:01.782 15:08:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:01.782 15:08:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.782 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.041 15:08:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.041 15:08:17 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:02.041 15:08:17 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:02.041 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.041 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.041 [2024-04-18 15:08:17.524153] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.041 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.041 15:08:17 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:02.041 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.041 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.041 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.041 15:08:17 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.041 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.041 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.041 [2024-04-18 15:08:17.548272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.041 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.041 15:08:17 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:02.041 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.041 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.041 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.042 15:08:17 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:02.042 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.042 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.042 malloc0 00:21:02.042 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.042 15:08:17 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.042 15:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:02.042 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:21:02.042 15:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:02.042 15:08:17 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:02.042 15:08:17 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:02.042 15:08:17 -- nvmf/common.sh@521 -- # config=() 00:21:02.042 15:08:17 -- nvmf/common.sh@521 -- # local subsystem config 00:21:02.042 15:08:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:02.042 15:08:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:02.042 { 00:21:02.042 "params": { 00:21:02.042 "name": "Nvme$subsystem", 00:21:02.042 "trtype": "$TEST_TRANSPORT", 
00:21:02.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:02.042 "adrfam": "ipv4", 00:21:02.042 "trsvcid": "$NVMF_PORT", 00:21:02.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:02.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:02.042 "hdgst": ${hdgst:-false}, 00:21:02.042 "ddgst": ${ddgst:-false} 00:21:02.042 }, 00:21:02.042 "method": "bdev_nvme_attach_controller" 00:21:02.042 } 00:21:02.042 EOF 00:21:02.042 )") 00:21:02.042 15:08:17 -- nvmf/common.sh@543 -- # cat 00:21:02.042 15:08:17 -- nvmf/common.sh@545 -- # jq . 00:21:02.042 15:08:17 -- nvmf/common.sh@546 -- # IFS=, 00:21:02.042 15:08:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:02.042 "params": { 00:21:02.042 "name": "Nvme1", 00:21:02.042 "trtype": "tcp", 00:21:02.042 "traddr": "10.0.0.2", 00:21:02.042 "adrfam": "ipv4", 00:21:02.042 "trsvcid": "4420", 00:21:02.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.042 "hdgst": false, 00:21:02.042 "ddgst": false 00:21:02.042 }, 00:21:02.042 "method": "bdev_nvme_attach_controller" 00:21:02.042 }' 00:21:02.042 [2024-04-18 15:08:17.655666] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:02.042 [2024-04-18 15:08:17.655739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74734 ] 00:21:02.301 [2024-04-18 15:08:17.801967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.301 [2024-04-18 15:08:17.893705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.560 Running I/O for 10 seconds... 00:21:12.541 00:21:12.541 Latency(us) 00:21:12.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:21:12.541 Verification LBA range: start 0x0 length 0x1000 00:21:12.541 Nvme1n1 : 10.01 7992.60 62.44 0.00 0.00 15970.97 2710.93 25688.01 00:21:12.541 =================================================================================================================== 00:21:12.541 Total : 7992.60 62.44 0.00 0.00 15970.97 2710.93 25688.01 00:21:12.801 15:08:28 -- target/zcopy.sh@39 -- # perfpid=74859 00:21:12.801 15:08:28 -- target/zcopy.sh@41 -- # xtrace_disable 00:21:12.801 15:08:28 -- common/autotest_common.sh@10 -- # set +x 00:21:12.801 15:08:28 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:21:12.801 15:08:28 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:21:12.801 15:08:28 -- nvmf/common.sh@521 -- # config=() 00:21:12.801 15:08:28 -- nvmf/common.sh@521 -- # local subsystem config 00:21:12.801 15:08:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:12.801 15:08:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:12.801 { 00:21:12.801 "params": { 00:21:12.801 "name": "Nvme$subsystem", 00:21:12.801 "trtype": "$TEST_TRANSPORT", 00:21:12.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.801 "adrfam": "ipv4", 00:21:12.801 "trsvcid": "$NVMF_PORT", 00:21:12.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.801 "hdgst": ${hdgst:-false}, 00:21:12.801 "ddgst": ${ddgst:-false} 00:21:12.801 }, 00:21:12.801 "method": "bdev_nvme_attach_controller" 00:21:12.801 } 00:21:12.801 EOF 00:21:12.801 
)") 00:21:12.801 [2024-04-18 15:08:28.315346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.315390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 15:08:28 -- nvmf/common.sh@543 -- # cat 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 15:08:28 -- nvmf/common.sh@545 -- # jq . 00:21:12.801 15:08:28 -- nvmf/common.sh@546 -- # IFS=, 00:21:12.801 [2024-04-18 15:08:28.327284] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 15:08:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:12.801 "params": { 00:21:12.801 "name": "Nvme1", 00:21:12.801 "trtype": "tcp", 00:21:12.801 "traddr": "10.0.0.2", 00:21:12.801 "adrfam": "ipv4", 00:21:12.801 "trsvcid": "4420", 00:21:12.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.801 "hdgst": false, 00:21:12.801 "ddgst": false 00:21:12.801 }, 00:21:12.801 "method": "bdev_nvme_attach_controller" 00:21:12.801 }' 00:21:12.801 [2024-04-18 15:08:28.327304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 [2024-04-18 15:08:28.339259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.339279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 [2024-04-18 15:08:28.351238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.351258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 [2024-04-18 15:08:28.359225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.359245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 [2024-04-18 15:08:28.364816] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
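The second bdevperf pass consumes the gen_nvmf_target_json output, evidently via bash process substitution (that is what surfaces in the trace as --json /dev/fd/63). Stripped of the xtrace noise, the invocation amounts to roughly:

  # 5-second 50/50 random read/write pass against the generated Nvme1 config;
  # <(...) is what appears in the trace as --json /dev/fd/63
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
  perfpid=$!

The wall of "Requested NSID 1 already in use ... Unable to add namespace" messages interleaved with it appears to be the expected outcome of the script re-issuing nvmf_subsystem_add_ns for NSID 1 while I/O is in flight: each call is rejected with -32602, and the run still proceeds to "Running I/O for 5 seconds..." further down.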
00:21:12.801 [2024-04-18 15:08:28.364885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74859 ] 00:21:12.801 [2024-04-18 15:08:28.367218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.367238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.801 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.801 [2024-04-18 15:08:28.375202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.801 [2024-04-18 15:08:28.375225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.383190] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.383210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.391178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.391197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.399167] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.399189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.411153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.411176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 
[2024-04-18 15:08:28.423140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.423167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.435121] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.435145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.447104] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.447127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.459088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.459110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.471070] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.471093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.483054] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.483078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.495039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:12.802 [2024-04-18 15:08:28.495063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:12.802 2024/04/18 15:08:28 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:12.802 [2024-04-18 15:08:28.504819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.062 [2024-04-18 15:08:28.507022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.062 [2024-04-18 15:08:28.507042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.062 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.062 [2024-04-18 15:08:28.519009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.062 [2024-04-18 15:08:28.519038] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.062 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.062 [2024-04-18 15:08:28.530988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.062 [2024-04-18 15:08:28.531011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.062 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.062 [2024-04-18 15:08:28.542972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.062 [2024-04-18 15:08:28.542998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.062 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.554960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.554986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.566944] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.566970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.578924] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.578951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.590909] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.590936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.599286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.063 [2024-04-18 15:08:28.602896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.602924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.614893] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.614928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.626876] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.626900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.642835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.642863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.658815] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 
15:08:28.658840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.674793] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.674818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.690761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.690785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.706745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.706768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.722759] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.722804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.738731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.738772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.750718] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.750757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.063 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.063 [2024-04-18 15:08:28.762695] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.063 [2024-04-18 15:08:28.762729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.774687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.774724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 Running I/O for 5 seconds... 00:21:13.323 [2024-04-18 15:08:28.786652] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.786680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.802675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.802723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.820356] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.820394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.835299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.835335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.851411] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.851449] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.866098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.866141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.877115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.877154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.892361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.892397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.908589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.908624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.919822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.919857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.935266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.935312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.951172] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.951223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.967307] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.967358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.982261] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.982306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:28 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:28.998354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:28.998398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:29.013103] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:29.013144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.323 [2024-04-18 15:08:29.023889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.323 [2024-04-18 15:08:29.023930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.323 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.039327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.039370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:13.582 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.054856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.054896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.582 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.070208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.070246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.582 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.086064] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.086100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.582 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.101179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.101221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.582 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.582 [2024-04-18 15:08:29.117137] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.582 [2024-04-18 15:08:29.117180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.131949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.131988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:13.583 [2024-04-18 15:08:29.148156] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.148198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.159361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.159404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.174848] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.174888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.190145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.190187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.204483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.204520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.215565] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.215600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.230763] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.230796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.249346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.249372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.264620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.264653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.583 [2024-04-18 15:08:29.283829] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.583 [2024-04-18 15:08:29.283863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.583 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.299290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.299323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.318510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.318554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.333639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.333671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.353231] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.353266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.368604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.368639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.383997] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.384031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.399241] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.399275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.415083] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.415118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.431201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.431237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.446351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.446386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.461467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.461501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.475520] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.475563] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.493501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.493546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.508853] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.508887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.523855] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.523889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:13.843 [2024-04-18 15:08:29.538797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:13.843 [2024-04-18 15:08:29.538832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:13.843 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.554384] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.554419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.569696] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.569729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.585232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.585269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.600320] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.600353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.611108] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.611141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.626456] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.626493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.641925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.641961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.657063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.657097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.677432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.677467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.691547] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.691580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.706134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.706169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.721783] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.721817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.740025] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.740061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.754931] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
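The repeated failures above are all the same JSON-RPC exchange: the test keeps asking the target to attach NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already in use, and the target rejects every call with Code=-32602 (Invalid parameters). A minimal sketch of that request and reply follows, assuming SPDK's default RPC socket at /var/tmp/spdk.sock; the socket path and the single-recv framing are illustrative assumptions, not values taken from this log.

import json
import socket

# Reconstruction of the call that fails repeatedly in the log above; the
# socket path is SPDK's usual default and is assumed here, not read from the log.
SOCK_PATH = "/var/tmp/spdk.sock"

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # A single recv is assumed to be enough for this small error reply.
    reply = json.loads(sock.recv(65536).decode())

# With NSID 1 already attached to the subsystem, the target answers with a
# JSON-RPC error object such as {"code": -32602, "message": "Invalid parameters"}.
print(reply.get("error") or reply.get("result"))

In the SPDK tree this method is normally driven through scripts/rpc.py rather than a raw socket; the raw request is sketched here only to make the parameters printed in the log explicit.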
00:21:14.104 [2024-04-18 15:08:29.754966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.766290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.766326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.781658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.781700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.104 [2024-04-18 15:08:29.797531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.104 [2024-04-18 15:08:29.797575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.104 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.812966] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.812998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.827864] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.827897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.843371] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.843411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.858597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.858635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.873724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.873763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.889963] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.890002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.900733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.900771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.916233] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.916269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.932023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.365 [2024-04-18 15:08:29.932061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.365 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.365 [2024-04-18 15:08:29.946735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:29.946770] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:29.958049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:29.958083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:29.973590] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:29.973625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:29.989126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:29.989163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:30.004627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:30.004664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:30.020493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:30.020528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:30.034918] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:30.034953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:30.049689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:30.049724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.366 [2024-04-18 15:08:30.064365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.366 [2024-04-18 15:08:30.064400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.366 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.625 [2024-04-18 15:08:30.079415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.079459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.093981] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.094020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.105375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.105411] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.120891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.120925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.136199] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.136232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.151314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.151349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.162800] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.162833] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.177822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.177858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.189171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.189207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.207860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.207894] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.222980] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.223013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:14.626 [2024-04-18 15:08:30.238865] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.238898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.253616] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.253649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.264683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.264710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.283449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.283481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.298842] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.298873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.626 [2024-04-18 15:08:30.318120] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.626 [2024-04-18 15:08:30.318152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.626 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:14.886 [2024-04-18 15:08:30.333187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:14.886 [2024-04-18 15:08:30.333221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:14.886 2024/04/18 15:08:30 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:21:14.886 [2024-04-18 15:08:30.347604] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:14.886 [2024-04-18 15:08:30.347635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:14.886 2024/04/18 15:08:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message sequence (subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc.c:1534:nvmf_rpc_ns_paused: "Unable to add namespace", and the JSON-RPC nvmf_subsystem_add_ns error Code=-32602 Msg=Invalid parameters with identical params) repeats for every add-namespace attempt between 2024-04-18 15:08:30.347 and 15:08:32.415 (elapsed 00:21:14.886 through 00:21:16.742); only the timestamps differ ...]
00:21:16.742 [2024-04-18 15:08:32.399715] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:16.742 [2024-04-18 15:08:32.399746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:21:16.742 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:21:16.742 [2024-04-18 15:08:32.415959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:21:16.742 [2024-04-18 15:08:32.415988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:21:16.742 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:16.742 [2024-04-18 15:08:32.430643] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.742 [2024-04-18 15:08:32.430670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:16.742 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:16.742 [2024-04-18 15:08:32.445023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:16.742 [2024-04-18 15:08:32.445051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.463746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.463776] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.478811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.478844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.494322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.494358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.509459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.509492] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:17.003 [2024-04-18 15:08:32.525332] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.525362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.540525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.540562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.556493] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.556520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.571987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.572017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.587559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.587592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.602286] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.602317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.622337] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.622368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.637418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.637452] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.653057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.653094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.671146] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.671182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.686834] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.686873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.003 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.003 [2024-04-18 15:08:32.703150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.003 [2024-04-18 15:08:32.703188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.722788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.722824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.738424] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.738459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.753839] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.753871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.769153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.769189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.784280] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.784314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.798161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.798198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.813467] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.813503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.828346] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.828386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.842515] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.842560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.853735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.853768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.263 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.263 [2024-04-18 15:08:32.871956] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.263 [2024-04-18 15:08:32.872014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.887655] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.887703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.902750] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.902786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.917908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.917940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.932949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.932982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.947313] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.947347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.264 [2024-04-18 15:08:32.962736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.264 [2024-04-18 15:08:32.962766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.264 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.523 [2024-04-18 15:08:32.978846] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.523 [2024-04-18 15:08:32.978876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.523 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.523 [2024-04-18 15:08:32.993568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.523 [2024-04-18 15:08:32.993600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.523 2024/04/18 15:08:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.009820] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.009856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.026573] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.026596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.041985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.042017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.056623] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.056653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.067901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.067934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.082805] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.082841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.096440] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.096475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.111901] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.111935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.128050] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:21:17.524 [2024-04-18 15:08:33.128082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.139559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.139592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.154903] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.154936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.170470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.170503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.185459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.185493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.200198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.200229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.524 [2024-04-18 15:08:33.215321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.524 [2024-04-18 15:08:33.215354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.524 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.230321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.230355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.246056] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.246086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.260625] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.260651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.275755] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.275788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.290796] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.290849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.306994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.307049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.325001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.325053] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.340504] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.340549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.356224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.356257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.371287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.371323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.386354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.386389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.400277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.400310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.415555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.415580] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.431275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.431306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.446041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.446073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.783 [2024-04-18 15:08:33.461658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.783 [2024-04-18 15:08:33.461689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.783 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:17.784 [2024-04-18 15:08:33.480036] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:17.784 [2024-04-18 15:08:33.480087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:17.784 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.495859] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.495915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.514170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.514210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.528751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.528789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.544813] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.544847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.559404] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.559437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.570689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.570719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.043 [2024-04-18 15:08:33.585752] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.043 [2024-04-18 15:08:33.585785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.043 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.599877] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.599906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.615203] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.615235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:21:18.044 [2024-04-18 15:08:33.633194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.633230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.647588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.647618] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.662193] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.662224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.677311] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.677342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.692357] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.692388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.708221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.708251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.726431] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.726462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.044 [2024-04-18 15:08:33.741904] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.044 [2024-04-18 15:08:33.741935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.044 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.303 [2024-04-18 15:08:33.757487] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.303 [2024-04-18 15:08:33.757515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.303 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.303 [2024-04-18 15:08:33.772717] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.303 [2024-04-18 15:08:33.772764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.303 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.303 00:21:18.303 Latency(us) 00:21:18.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.303 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:21:18.303 Nvme1n1 : 5.01 15604.36 121.91 0.00 0.00 8194.93 3816.35 18318.50 00:21:18.303 =================================================================================================================== 00:21:18.303 Total : 15604.36 121.91 0.00 0.00 8194.93 3816.35 18318.50 00:21:18.304 [2024-04-18 15:08:33.788472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.788508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.800436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.800460] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.812409] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.812423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.824395] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.824418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.840361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.840381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.856344] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.856367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.872316] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.872336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.888296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.888320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.904278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.904304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.920250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.920269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.936228] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.936240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.952218] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.952249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.968191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.968215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:33.984190] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:33.984213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.304 [2024-04-18 15:08:34.000188] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:21:18.304 [2024-04-18 15:08:34.000207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.304 2024/04/18 15:08:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.564 [2024-04-18 15:08:34.016184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:21:18.564 [2024-04-18 15:08:34.016199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:18.564 2024/04/18 15:08:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:18.564 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74859) - No such process 00:21:18.564 15:08:34 -- target/zcopy.sh@49 -- # wait 74859 00:21:18.564 15:08:34 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:18.564 15:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.564 15:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.564 15:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.564 15:08:34 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:18.564 15:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.564 15:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.564 delay0 00:21:18.564 15:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.564 15:08:34 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:21:18.564 15:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.564 15:08:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.564 15:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.564 15:08:34 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:21:18.564 [2024-04-18 15:08:34.252013] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:25.152 Initializing NVMe Controllers 00:21:25.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.152 Initialization complete. Launching workers. 
00:21:25.152 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:21:25.152 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 33 00:21:25.152 success 226, unsuccess 169, failed 0 00:21:25.152 15:08:40 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:21:25.152 15:08:40 -- target/zcopy.sh@60 -- # nvmftestfini 00:21:25.152 15:08:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:25.152 15:08:40 -- nvmf/common.sh@117 -- # sync 00:21:25.152 15:08:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.152 15:08:40 -- nvmf/common.sh@120 -- # set +e 00:21:25.152 15:08:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.152 15:08:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.152 rmmod nvme_tcp 00:21:25.152 rmmod nvme_fabrics 00:21:25.152 rmmod nvme_keyring 00:21:25.152 15:08:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.152 15:08:40 -- nvmf/common.sh@124 -- # set -e 00:21:25.152 15:08:40 -- nvmf/common.sh@125 -- # return 0 00:21:25.152 15:08:40 -- nvmf/common.sh@478 -- # '[' -n 74684 ']' 00:21:25.152 15:08:40 -- nvmf/common.sh@479 -- # killprocess 74684 00:21:25.152 15:08:40 -- common/autotest_common.sh@936 -- # '[' -z 74684 ']' 00:21:25.152 15:08:40 -- common/autotest_common.sh@940 -- # kill -0 74684 00:21:25.152 15:08:40 -- common/autotest_common.sh@941 -- # uname 00:21:25.152 15:08:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:25.152 15:08:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74684 00:21:25.152 15:08:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:25.152 15:08:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:25.152 killing process with pid 74684 00:21:25.152 15:08:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74684' 00:21:25.152 15:08:40 -- common/autotest_common.sh@955 -- # kill 74684 00:21:25.152 15:08:40 -- common/autotest_common.sh@960 -- # wait 74684 00:21:25.152 15:08:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:25.152 15:08:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:25.152 15:08:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:25.152 15:08:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.152 15:08:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.152 15:08:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.152 15:08:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.152 15:08:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.152 15:08:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:25.152 00:21:25.152 real 0m24.784s 00:21:25.152 user 0m40.222s 00:21:25.152 sys 0m7.563s 00:21:25.152 15:08:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.152 ************************************ 00:21:25.152 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.152 END TEST nvmf_zcopy 00:21:25.152 ************************************ 00:21:25.152 15:08:40 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:25.152 15:08:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.152 15:08:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.152 15:08:40 -- common/autotest_common.sh@10 -- # set +x 00:21:25.152 ************************************ 00:21:25.152 START TEST 
nvmf_nmic 00:21:25.152 ************************************ 00:21:25.152 15:08:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:21:25.411 * Looking for test storage... 00:21:25.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:25.411 15:08:40 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.411 15:08:40 -- nvmf/common.sh@7 -- # uname -s 00:21:25.411 15:08:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.411 15:08:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.411 15:08:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.411 15:08:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.411 15:08:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.411 15:08:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.411 15:08:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.411 15:08:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.411 15:08:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.411 15:08:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.411 15:08:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:25.411 15:08:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:25.411 15:08:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.411 15:08:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.411 15:08:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.411 15:08:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.411 15:08:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.411 15:08:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.411 15:08:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.411 15:08:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.411 15:08:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.411 15:08:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.412 15:08:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.412 15:08:41 -- paths/export.sh@5 -- # export PATH 00:21:25.412 15:08:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.412 15:08:41 -- nvmf/common.sh@47 -- # : 0 00:21:25.412 15:08:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.412 15:08:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.412 15:08:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.412 15:08:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.412 15:08:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.412 15:08:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.412 15:08:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.412 15:08:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.412 15:08:41 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:25.412 15:08:41 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:25.412 15:08:41 -- target/nmic.sh@14 -- # nvmftestinit 00:21:25.412 15:08:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.412 15:08:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.412 15:08:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.412 15:08:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.412 15:08:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.412 15:08:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.412 15:08:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.412 15:08:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.412 15:08:41 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:25.412 15:08:41 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:25.412 15:08:41 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:25.412 15:08:41 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:25.412 15:08:41 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:25.412 15:08:41 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:25.412 15:08:41 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.412 15:08:41 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.412 15:08:41 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:25.412 15:08:41 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:25.412 15:08:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.412 15:08:41 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.412 15:08:41 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.412 15:08:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.412 15:08:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.412 15:08:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.412 15:08:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.412 15:08:41 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.412 15:08:41 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:25.412 15:08:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:25.412 Cannot find device "nvmf_tgt_br" 00:21:25.412 15:08:41 -- nvmf/common.sh@155 -- # true 00:21:25.412 15:08:41 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.412 Cannot find device "nvmf_tgt_br2" 00:21:25.412 15:08:41 -- nvmf/common.sh@156 -- # true 00:21:25.412 15:08:41 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:25.412 15:08:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:25.412 Cannot find device "nvmf_tgt_br" 00:21:25.412 15:08:41 -- nvmf/common.sh@158 -- # true 00:21:25.412 15:08:41 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:25.672 Cannot find device "nvmf_tgt_br2" 00:21:25.672 15:08:41 -- nvmf/common.sh@159 -- # true 00:21:25.672 15:08:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:25.672 15:08:41 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:25.672 15:08:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.672 15:08:41 -- nvmf/common.sh@162 -- # true 00:21:25.672 15:08:41 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.672 15:08:41 -- nvmf/common.sh@163 -- # true 00:21:25.672 15:08:41 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:25.672 15:08:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:25.672 15:08:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:25.672 15:08:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:25.672 15:08:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:25.672 15:08:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:25.672 15:08:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:25.672 15:08:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:25.672 15:08:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:25.672 15:08:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:25.672 15:08:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:25.672 15:08:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:25.672 15:08:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:25.672 15:08:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:25.672 15:08:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:25.672 15:08:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:25.672 15:08:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:25.672 15:08:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:25.672 15:08:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:25.672 15:08:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:25.672 15:08:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:25.931 15:08:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:25.931 15:08:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:25.931 15:08:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:25.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:21:25.931 00:21:25.931 --- 10.0.0.2 ping statistics --- 00:21:25.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.931 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:25.931 15:08:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:25.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:25.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:25.931 00:21:25.932 --- 10.0.0.3 ping statistics --- 00:21:25.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.932 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:25.932 15:08:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:25.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:25.932 00:21:25.932 --- 10.0.0.1 ping statistics --- 00:21:25.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.932 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:25.932 15:08:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.932 15:08:41 -- nvmf/common.sh@422 -- # return 0 00:21:25.932 15:08:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:25.932 15:08:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.932 15:08:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:25.932 15:08:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:25.932 15:08:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.932 15:08:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:25.932 15:08:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:25.932 15:08:41 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:21:25.932 15:08:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:25.932 15:08:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:25.932 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:21:25.932 15:08:41 -- nvmf/common.sh@470 -- # nvmfpid=75184 00:21:25.932 15:08:41 -- nvmf/common.sh@471 -- # waitforlisten 75184 00:21:25.932 15:08:41 -- common/autotest_common.sh@817 -- # '[' -z 75184 ']' 00:21:25.932 15:08:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.932 15:08:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:25.932 15:08:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.932 15:08:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
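The nvmf_veth_init sequence traced above builds a small veth-and-bridge topology so the initiator side (10.0.0.1, default namespace) can reach the target side (10.0.0.2, inside the nvmf_tgt_ns_spdk namespace). A minimal sketch of the equivalent commands, assuming root privileges, iproute2 and iptables available, and no pre-existing links or namespaces with these names (the second target interface, nvmf_tgt_if2/10.0.0.3, is created the same way and omitted here):

  # create the namespace and two veth pairs; the target-side end moves into the namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check, as in the trace above

The nvmf_tgt application is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt ...), which is what the nvmfappstart step in this trace does.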
00:21:25.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.932 15:08:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.932 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:21:25.932 [2024-04-18 15:08:41.494352] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:25.932 [2024-04-18 15:08:41.494979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.192 [2024-04-18 15:08:41.637737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.192 [2024-04-18 15:08:41.735212] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.192 [2024-04-18 15:08:41.735493] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.192 [2024-04-18 15:08:41.735607] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.192 [2024-04-18 15:08:41.735691] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.192 [2024-04-18 15:08:41.735760] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.192 [2024-04-18 15:08:41.735956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.192 [2024-04-18 15:08:41.736091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.192 [2024-04-18 15:08:41.736951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.192 [2024-04-18 15:08:41.736952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.761 15:08:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.761 15:08:42 -- common/autotest_common.sh@850 -- # return 0 00:21:26.761 15:08:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:26.761 15:08:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:26.761 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:26.761 15:08:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.761 15:08:42 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.761 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.761 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:26.761 [2024-04-18 15:08:42.427721] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.761 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.761 15:08:42 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:26.761 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.761 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 Malloc0 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.020 15:08:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 [2024-04-18 15:08:42.502521] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 test case1: single bdev can't be used in multiple subsystems 00:21:27.020 15:08:42 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:21:27.020 15:08:42 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@28 -- # nmic_status=0 00:21:27.020 15:08:42 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 [2024-04-18 15:08:42.526327] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:21:27.020 [2024-04-18 15:08:42.526369] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:21:27.020 [2024-04-18 15:08:42.526387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:27.020 2024/04/18 15:08:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:27.020 request: 00:21:27.020 { 00:21:27.020 "method": "nvmf_subsystem_add_ns", 00:21:27.020 "params": { 00:21:27.020 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.020 "namespace": { 00:21:27.020 "bdev_name": "Malloc0", 00:21:27.020 "no_auto_visible": false 00:21:27.020 } 00:21:27.020 } 00:21:27.020 } 00:21:27.020 Got JSON-RPC error response 00:21:27.020 GoRPCClient: error on JSON-RPC call 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@29 -- # nmic_status=1 00:21:27.020 15:08:42 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:21:27.020 Adding namespace failed - expected result. 00:21:27.020 15:08:42 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
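What test case1 exercises above is SPDK's exclusive-write claim on a bdev: the second nvmf_subsystem_add_ns must fail because Malloc0 is already claimed by cnode1. A minimal sketch of the same RPC sequence against a running nvmf_tgt, where rpc.py stands for scripts/rpc.py from the SPDK repo talking to the default /var/tmp/spdk.sock socket:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed (Code=-32602)

The test treats the -32602 "Invalid parameters" response as the pass condition, which is why the error lines above are reported as the expected result rather than a failure.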
00:21:27.020 test case2: host connect to nvmf target in multiple paths 00:21:27.020 15:08:42 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:21:27.020 15:08:42 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.020 15:08:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.020 15:08:42 -- common/autotest_common.sh@10 -- # set +x 00:21:27.020 [2024-04-18 15:08:42.542453] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:27.020 15:08:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.020 15:08:42 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:27.020 15:08:42 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:21:27.280 15:08:42 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:21:27.280 15:08:42 -- common/autotest_common.sh@1184 -- # local i=0 00:21:27.280 15:08:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.280 15:08:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:27.280 15:08:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:29.815 15:08:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:29.815 15:08:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:29.815 15:08:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:29.815 15:08:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:29.815 15:08:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.815 15:08:44 -- common/autotest_common.sh@1194 -- # return 0 00:21:29.815 15:08:44 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:29.815 [global] 00:21:29.815 thread=1 00:21:29.815 invalidate=1 00:21:29.815 rw=write 00:21:29.815 time_based=1 00:21:29.815 runtime=1 00:21:29.815 ioengine=libaio 00:21:29.815 direct=1 00:21:29.815 bs=4096 00:21:29.815 iodepth=1 00:21:29.815 norandommap=0 00:21:29.815 numjobs=1 00:21:29.815 00:21:29.815 verify_dump=1 00:21:29.815 verify_backlog=512 00:21:29.815 verify_state_save=0 00:21:29.815 do_verify=1 00:21:29.815 verify=crc32c-intel 00:21:29.815 [job0] 00:21:29.815 filename=/dev/nvme0n1 00:21:29.815 Could not set queue depth (nvme0n1) 00:21:29.815 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:29.815 fio-3.35 00:21:29.815 Starting 1 thread 00:21:30.753 00:21:30.753 job0: (groupid=0, jobs=1): err= 0: pid=75298: Thu Apr 18 15:08:46 2024 00:21:30.753 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:21:30.753 slat (nsec): min=8140, max=20638, avg=8700.81, stdev=805.59 00:21:30.753 clat (usec): min=90, max=360, avg=106.46, stdev=10.23 00:21:30.753 lat (usec): min=99, max=369, avg=115.16, stdev=10.32 00:21:30.753 clat percentiles (usec): 00:21:30.753 | 1.00th=[ 96], 5.00th=[ 98], 10.00th=[ 99], 20.00th=[ 101], 00:21:30.753 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 105], 60.00th=[ 108], 00:21:30.753 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 115], 
95.00th=[ 119], 00:21:30.753 | 99.00th=[ 133], 99.50th=[ 147], 99.90th=[ 277], 99.95th=[ 310], 00:21:30.753 | 99.99th=[ 363] 00:21:30.753 write: IOPS=5105, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:21:30.753 slat (usec): min=12, max=118, avg=14.41, stdev= 5.05 00:21:30.753 clat (usec): min=62, max=304, avg=75.73, stdev= 9.02 00:21:30.753 lat (usec): min=75, max=318, avg=90.14, stdev=11.59 00:21:30.753 clat percentiles (usec): 00:21:30.753 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:21:30.753 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:21:30.753 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 87], 00:21:30.753 | 99.00th=[ 98], 99.50th=[ 123], 99.90th=[ 174], 99.95th=[ 235], 00:21:30.753 | 99.99th=[ 306] 00:21:30.753 bw ( KiB/s): min=20439, max=20439, per=100.00%, avg=20439.00, stdev= 0.00, samples=1 00:21:30.753 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:21:30.753 lat (usec) : 100=58.66%, 250=41.27%, 500=0.07% 00:21:30.753 cpu : usr=2.50%, sys=8.00%, ctx=9719, majf=0, minf=2 00:21:30.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:30.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.753 issued rwts: total=4608,5111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:30.753 00:21:30.753 Run status group 0 (all jobs): 00:21:30.753 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:21:30.753 WRITE: bw=19.9MiB/s (20.9MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (20.9MB), run=1001-1001msec 00:21:30.753 00:21:30.753 Disk stats (read/write): 00:21:30.753 nvme0n1: ios=4161/4608, merge=0/0, ticks=442/382, in_queue=824, util=91.08% 00:21:30.753 15:08:46 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:30.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:30.754 15:08:46 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:30.754 15:08:46 -- common/autotest_common.sh@1205 -- # local i=0 00:21:30.754 15:08:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:30.754 15:08:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:30.754 15:08:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:30.754 15:08:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:30.754 15:08:46 -- common/autotest_common.sh@1217 -- # return 0 00:21:30.754 15:08:46 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:30.754 15:08:46 -- target/nmic.sh@53 -- # nvmftestfini 00:21:30.754 15:08:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:30.754 15:08:46 -- nvmf/common.sh@117 -- # sync 00:21:30.754 15:08:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.754 15:08:46 -- nvmf/common.sh@120 -- # set +e 00:21:30.754 15:08:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.754 15:08:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.013 rmmod nvme_tcp 00:21:31.013 rmmod nvme_fabrics 00:21:31.013 rmmod nvme_keyring 00:21:31.013 15:08:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.013 15:08:46 -- nvmf/common.sh@124 -- # set -e 00:21:31.013 15:08:46 -- nvmf/common.sh@125 -- # return 0 00:21:31.013 15:08:46 -- nvmf/common.sh@478 -- # '[' -n 75184 ']' 
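Test case2 above checks that the host can reach the same subsystem through two listeners (ports 4420 and 4421) and ends up with one namespace behind two controllers. A minimal sketch of the host-side flow, assuming nvme-cli and reusing the host NQN/ID generated by nvme gen-hostnqn earlier in this trace:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: the namespace is visible as a block device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drops both paths, matching the "disconnected 2 controller(s)" line above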
00:21:31.013 15:08:46 -- nvmf/common.sh@479 -- # killprocess 75184 00:21:31.013 15:08:46 -- common/autotest_common.sh@936 -- # '[' -z 75184 ']' 00:21:31.013 15:08:46 -- common/autotest_common.sh@940 -- # kill -0 75184 00:21:31.013 15:08:46 -- common/autotest_common.sh@941 -- # uname 00:21:31.013 15:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:31.013 15:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75184 00:21:31.013 15:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:31.013 15:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:31.013 killing process with pid 75184 00:21:31.013 15:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75184' 00:21:31.013 15:08:46 -- common/autotest_common.sh@955 -- # kill 75184 00:21:31.013 15:08:46 -- common/autotest_common.sh@960 -- # wait 75184 00:21:31.271 15:08:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:31.271 15:08:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:31.271 15:08:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:31.271 15:08:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.271 15:08:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.271 15:08:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.271 15:08:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.271 15:08:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.271 15:08:46 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:31.271 00:21:31.271 real 0m5.979s 00:21:31.271 user 0m19.525s 00:21:31.271 sys 0m1.593s 00:21:31.271 15:08:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:31.271 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:31.271 ************************************ 00:21:31.271 END TEST nvmf_nmic 00:21:31.271 ************************************ 00:21:31.271 15:08:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:31.271 15:08:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:31.271 15:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:31.271 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:21:31.530 ************************************ 00:21:31.530 START TEST nvmf_fio_target 00:21:31.530 ************************************ 00:21:31.530 15:08:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:21:31.530 * Looking for test storage... 
00:21:31.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:31.530 15:08:47 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.530 15:08:47 -- nvmf/common.sh@7 -- # uname -s 00:21:31.530 15:08:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.530 15:08:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.530 15:08:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.530 15:08:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.530 15:08:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.530 15:08:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.530 15:08:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.530 15:08:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.530 15:08:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.530 15:08:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.530 15:08:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:31.530 15:08:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:31.530 15:08:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.530 15:08:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.530 15:08:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.530 15:08:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.530 15:08:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.530 15:08:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.530 15:08:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.530 15:08:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.530 15:08:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.530 15:08:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.530 15:08:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.530 15:08:47 -- paths/export.sh@5 -- # export PATH 00:21:31.530 15:08:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.530 15:08:47 -- nvmf/common.sh@47 -- # : 0 00:21:31.530 15:08:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.530 15:08:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.530 15:08:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.530 15:08:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.530 15:08:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.531 15:08:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.531 15:08:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.531 15:08:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.531 15:08:47 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.531 15:08:47 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.531 15:08:47 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.531 15:08:47 -- target/fio.sh@16 -- # nvmftestinit 00:21:31.531 15:08:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:31.531 15:08:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.531 15:08:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:31.531 15:08:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:31.531 15:08:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:31.531 15:08:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.531 15:08:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.531 15:08:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.531 15:08:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:31.531 15:08:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:31.531 15:08:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:31.531 15:08:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:31.531 15:08:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:31.531 15:08:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:31.531 15:08:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.531 15:08:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.531 15:08:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:31.531 15:08:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:31.531 15:08:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.531 15:08:47 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.531 15:08:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.531 15:08:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.531 15:08:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.531 15:08:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.531 15:08:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.531 15:08:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.531 15:08:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:31.531 15:08:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:31.531 Cannot find device "nvmf_tgt_br" 00:21:31.531 15:08:47 -- nvmf/common.sh@155 -- # true 00:21:31.531 15:08:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.531 Cannot find device "nvmf_tgt_br2" 00:21:31.531 15:08:47 -- nvmf/common.sh@156 -- # true 00:21:31.531 15:08:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:31.531 15:08:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:31.790 Cannot find device "nvmf_tgt_br" 00:21:31.790 15:08:47 -- nvmf/common.sh@158 -- # true 00:21:31.790 15:08:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:31.790 Cannot find device "nvmf_tgt_br2" 00:21:31.790 15:08:47 -- nvmf/common.sh@159 -- # true 00:21:31.790 15:08:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:31.790 15:08:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:31.790 15:08:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.790 15:08:47 -- nvmf/common.sh@162 -- # true 00:21:31.790 15:08:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.790 15:08:47 -- nvmf/common.sh@163 -- # true 00:21:31.790 15:08:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.790 15:08:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.790 15:08:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.790 15:08:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.790 15:08:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.790 15:08:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.791 15:08:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.791 15:08:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.791 15:08:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.791 15:08:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:31.791 15:08:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:31.791 15:08:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:31.791 15:08:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:31.791 15:08:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.791 15:08:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:21:31.791 15:08:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.791 15:08:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:32.049 15:08:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:32.050 15:08:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:32.050 15:08:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:32.050 15:08:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:32.050 15:08:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:32.050 15:08:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:32.050 15:08:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:32.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:21:32.050 00:21:32.050 --- 10.0.0.2 ping statistics --- 00:21:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.050 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:32.050 15:08:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:32.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:32.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:21:32.050 00:21:32.050 --- 10.0.0.3 ping statistics --- 00:21:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.050 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:32.050 15:08:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:32.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:32.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:32.050 00:21:32.050 --- 10.0.0.1 ping statistics --- 00:21:32.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.050 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:32.050 15:08:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.050 15:08:47 -- nvmf/common.sh@422 -- # return 0 00:21:32.050 15:08:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:32.050 15:08:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.050 15:08:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:32.050 15:08:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:32.050 15:08:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.050 15:08:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:32.050 15:08:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:32.050 15:08:47 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:21:32.050 15:08:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:32.050 15:08:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.050 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:21:32.050 15:08:47 -- nvmf/common.sh@470 -- # nvmfpid=75486 00:21:32.050 15:08:47 -- nvmf/common.sh@471 -- # waitforlisten 75486 00:21:32.050 15:08:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:32.050 15:08:47 -- common/autotest_common.sh@817 -- # '[' -z 75486 ']' 00:21:32.050 15:08:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.050 15:08:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.050 15:08:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:32.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.050 15:08:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.050 15:08:47 -- common/autotest_common.sh@10 -- # set +x 00:21:32.050 [2024-04-18 15:08:47.668118] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:32.050 [2024-04-18 15:08:47.668732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.309 [2024-04-18 15:08:47.811209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.309 [2024-04-18 15:08:47.899357] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.309 [2024-04-18 15:08:47.899421] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.309 [2024-04-18 15:08:47.899431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.309 [2024-04-18 15:08:47.899440] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.309 [2024-04-18 15:08:47.899447] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.309 [2024-04-18 15:08:47.899669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.309 [2024-04-18 15:08:47.899866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.309 [2024-04-18 15:08:47.901408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.309 [2024-04-18 15:08:47.901409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.879 15:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.879 15:08:48 -- common/autotest_common.sh@850 -- # return 0 00:21:32.879 15:08:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:32.879 15:08:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:32.879 15:08:48 -- common/autotest_common.sh@10 -- # set +x 00:21:32.879 15:08:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.879 15:08:48 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:33.138 [2024-04-18 15:08:48.745546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.138 15:08:48 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:33.396 15:08:48 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:21:33.396 15:08:48 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:33.655 15:08:49 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:21:33.655 15:08:49 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:33.913 15:08:49 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:21:33.913 15:08:49 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:34.172 15:08:49 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:21:34.172 15:08:49 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:21:34.172 15:08:49 -- target/fio.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:34.430 15:08:50 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:21:34.430 15:08:50 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:34.689 15:08:50 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:21:34.689 15:08:50 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:34.949 15:08:50 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:21:34.949 15:08:50 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:21:35.208 15:08:50 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:35.467 15:08:50 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:35.467 15:08:50 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:35.467 15:08:51 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:21:35.467 15:08:51 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:35.726 15:08:51 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.986 [2024-04-18 15:08:51.581019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.986 15:08:51 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:21:36.246 15:08:51 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:21:36.505 15:08:52 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:36.764 15:08:52 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:21:36.764 15:08:52 -- common/autotest_common.sh@1184 -- # local i=0 00:21:36.764 15:08:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:36.764 15:08:52 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:21:36.764 15:08:52 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:21:36.764 15:08:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:38.669 15:08:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:38.669 15:08:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:38.669 15:08:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:38.669 15:08:54 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:21:38.669 15:08:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:38.669 15:08:54 -- common/autotest_common.sh@1194 -- # return 0 00:21:38.669 15:08:54 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:21:38.669 [global] 00:21:38.669 thread=1 00:21:38.669 invalidate=1 00:21:38.669 rw=write 00:21:38.669 time_based=1 00:21:38.669 runtime=1 00:21:38.669 ioengine=libaio 00:21:38.669 direct=1 00:21:38.669 bs=4096 00:21:38.669 iodepth=1 00:21:38.669 norandommap=0 
00:21:38.669 numjobs=1 00:21:38.669 00:21:38.669 verify_dump=1 00:21:38.669 verify_backlog=512 00:21:38.669 verify_state_save=0 00:21:38.669 do_verify=1 00:21:38.669 verify=crc32c-intel 00:21:38.669 [job0] 00:21:38.669 filename=/dev/nvme0n1 00:21:38.929 [job1] 00:21:38.929 filename=/dev/nvme0n2 00:21:38.929 [job2] 00:21:38.929 filename=/dev/nvme0n3 00:21:38.929 [job3] 00:21:38.929 filename=/dev/nvme0n4 00:21:38.929 Could not set queue depth (nvme0n1) 00:21:38.929 Could not set queue depth (nvme0n2) 00:21:38.929 Could not set queue depth (nvme0n3) 00:21:38.929 Could not set queue depth (nvme0n4) 00:21:38.929 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:38.929 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:38.929 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:38.929 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:38.929 fio-3.35 00:21:38.929 Starting 4 threads 00:21:40.332 00:21:40.332 job0: (groupid=0, jobs=1): err= 0: pid=75773: Thu Apr 18 15:08:55 2024 00:21:40.332 read: IOPS=1786, BW=7145KiB/s (7316kB/s)(7152KiB/1001msec) 00:21:40.332 slat (nsec): min=5985, max=42473, avg=10758.66, stdev=3746.50 00:21:40.332 clat (usec): min=143, max=7816, avg=282.34, stdev=208.02 00:21:40.332 lat (usec): min=151, max=7822, avg=293.10, stdev=208.17 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 163], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 241], 00:21:40.332 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 273], 60.00th=[ 285], 00:21:40.332 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 334], 00:21:40.332 | 99.00th=[ 408], 99.50th=[ 457], 99.90th=[ 3654], 99.95th=[ 7832], 00:21:40.332 | 99.99th=[ 7832] 00:21:40.332 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:40.332 slat (usec): min=7, max=232, avg=15.73, stdev= 7.62 00:21:40.332 clat (usec): min=89, max=364, avg=214.47, stdev=28.68 00:21:40.332 lat (usec): min=103, max=564, avg=230.20, stdev=31.13 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 143], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:21:40.332 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 212], 60.00th=[ 225], 00:21:40.332 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 00:21:40.332 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 338], 99.95th=[ 343], 00:21:40.332 | 99.99th=[ 363] 00:21:40.332 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:21:40.332 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:40.332 lat (usec) : 100=0.34%, 250=63.03%, 500=36.47%, 750=0.03%, 1000=0.05% 00:21:40.332 lat (msec) : 4=0.05%, 10=0.03% 00:21:40.332 cpu : usr=0.90%, sys=4.20%, ctx=3836, majf=0, minf=15 00:21:40.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 issued rwts: total=1788,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:40.332 job1: (groupid=0, jobs=1): err= 0: pid=75774: Thu Apr 18 15:08:55 2024 00:21:40.332 read: IOPS=2832, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:21:40.332 slat (nsec): min=9716, max=40908, avg=13508.67, 
stdev=3787.94 00:21:40.332 clat (usec): min=111, max=343, avg=170.80, stdev=48.68 00:21:40.332 lat (usec): min=124, max=360, avg=184.31, stdev=47.53 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 135], 00:21:40.332 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:21:40.332 | 70.00th=[ 163], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 258], 00:21:40.332 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 343], 00:21:40.332 | 99.99th=[ 343] 00:21:40.332 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:21:40.332 slat (usec): min=11, max=192, avg=20.42, stdev= 7.72 00:21:40.332 clat (usec): min=76, max=341, avg=132.45, stdev=42.37 00:21:40.332 lat (usec): min=96, max=403, avg=152.88, stdev=42.03 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 86], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 98], 00:21:40.332 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 110], 60.00th=[ 117], 00:21:40.332 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 204], 00:21:40.332 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 241], 99.95th=[ 245], 00:21:40.332 | 99.99th=[ 343] 00:21:40.332 bw ( KiB/s): min=16232, max=16232, per=44.08%, avg=16232.00, stdev= 0.00, samples=1 00:21:40.332 iops : min= 4058, max= 4058, avg=4058.00, stdev= 0.00, samples=1 00:21:40.332 lat (usec) : 100=13.32%, 250=82.28%, 500=4.40% 00:21:40.332 cpu : usr=1.60%, sys=7.80%, ctx=5907, majf=0, minf=12 00:21:40.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 issued rwts: total=2835,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:40.332 job2: (groupid=0, jobs=1): err= 0: pid=75775: Thu Apr 18 15:08:55 2024 00:21:40.332 read: IOPS=1794, BW=7177KiB/s (7349kB/s)(7184KiB/1001msec) 00:21:40.332 slat (nsec): min=8217, max=61659, avg=12366.08, stdev=7330.53 00:21:40.332 clat (usec): min=125, max=666, avg=278.03, stdev=54.51 00:21:40.332 lat (usec): min=134, max=695, avg=290.39, stdev=57.32 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 137], 5.00th=[ 212], 10.00th=[ 235], 20.00th=[ 243], 00:21:40.332 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 281], 60.00th=[ 289], 00:21:40.332 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 388], 00:21:40.332 | 99.00th=[ 437], 99.50th=[ 461], 99.90th=[ 660], 99.95th=[ 668], 00:21:40.332 | 99.99th=[ 668] 00:21:40.332 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:40.332 slat (usec): min=12, max=197, avg=20.74, stdev= 9.72 00:21:40.332 clat (usec): min=115, max=308, avg=210.35, stdev=22.02 00:21:40.332 lat (usec): min=141, max=482, avg=231.10, stdev=28.06 00:21:40.332 clat percentiles (usec): 00:21:40.332 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 190], 00:21:40.332 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 217], 00:21:40.332 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 249], 00:21:40.332 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 293], 00:21:40.332 | 99.99th=[ 310] 00:21:40.332 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:21:40.332 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:40.332 lat (usec) : 250=65.74%, 500=34.18%, 
750=0.08% 00:21:40.332 cpu : usr=1.20%, sys=4.90%, ctx=3844, majf=0, minf=5 00:21:40.332 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.332 issued rwts: total=1796,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.332 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:40.332 job3: (groupid=0, jobs=1): err= 0: pid=75776: Thu Apr 18 15:08:55 2024 00:21:40.333 read: IOPS=1784, BW=7137KiB/s (7308kB/s)(7144KiB/1001msec) 00:21:40.333 slat (nsec): min=6241, max=96776, avg=11081.71, stdev=3427.36 00:21:40.333 clat (usec): min=113, max=7866, avg=282.19, stdev=209.80 00:21:40.333 lat (usec): min=130, max=7875, avg=293.27, stdev=209.61 00:21:40.333 clat percentiles (usec): 00:21:40.333 | 1.00th=[ 172], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 239], 00:21:40.333 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 273], 60.00th=[ 289], 00:21:40.333 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:21:40.333 | 99.00th=[ 408], 99.50th=[ 482], 99.90th=[ 3687], 99.95th=[ 7898], 00:21:40.333 | 99.99th=[ 7898] 00:21:40.333 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:21:40.333 slat (nsec): min=10803, max=68188, avg=17756.87, stdev=5304.74 00:21:40.333 clat (usec): min=103, max=344, avg=212.54, stdev=29.99 00:21:40.333 lat (usec): min=118, max=359, avg=230.30, stdev=29.93 00:21:40.333 clat percentiles (usec): 00:21:40.333 | 1.00th=[ 126], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:21:40.333 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 210], 60.00th=[ 223], 00:21:40.333 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 258], 00:21:40.333 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 318], 99.95th=[ 338], 00:21:40.333 | 99.99th=[ 347] 00:21:40.333 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:21:40.333 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:21:40.333 lat (usec) : 250=64.29%, 500=35.52%, 750=0.05%, 1000=0.05% 00:21:40.333 lat (msec) : 4=0.05%, 10=0.03% 00:21:40.333 cpu : usr=0.90%, sys=4.60%, ctx=3836, majf=0, minf=3 00:21:40.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:40.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.333 issued rwts: total=1786,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:40.333 00:21:40.333 Run status group 0 (all jobs): 00:21:40.333 READ: bw=32.0MiB/s (33.6MB/s), 7137KiB/s-11.1MiB/s (7308kB/s-11.6MB/s), io=32.1MiB (33.6MB), run=1001-1001msec 00:21:40.333 WRITE: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:21:40.333 00:21:40.333 Disk stats (read/write): 00:21:40.333 nvme0n1: ios=1586/1736, merge=0/0, ticks=457/382, in_queue=839, util=88.06% 00:21:40.333 nvme0n2: ios=2609/2781, merge=0/0, ticks=450/360, in_queue=810, util=89.59% 00:21:40.333 nvme0n3: ios=1541/1751, merge=0/0, ticks=454/383, in_queue=837, util=89.61% 00:21:40.333 nvme0n4: ios=1536/1721, merge=0/0, ticks=422/377, in_queue=799, util=89.56% 00:21:40.333 15:08:55 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:40.333 [global] 
00:21:40.333 thread=1 00:21:40.333 invalidate=1 00:21:40.333 rw=randwrite 00:21:40.333 time_based=1 00:21:40.333 runtime=1 00:21:40.333 ioengine=libaio 00:21:40.333 direct=1 00:21:40.333 bs=4096 00:21:40.333 iodepth=1 00:21:40.333 norandommap=0 00:21:40.333 numjobs=1 00:21:40.333 00:21:40.333 verify_dump=1 00:21:40.333 verify_backlog=512 00:21:40.333 verify_state_save=0 00:21:40.333 do_verify=1 00:21:40.333 verify=crc32c-intel 00:21:40.333 [job0] 00:21:40.333 filename=/dev/nvme0n1 00:21:40.333 [job1] 00:21:40.333 filename=/dev/nvme0n2 00:21:40.333 [job2] 00:21:40.333 filename=/dev/nvme0n3 00:21:40.333 [job3] 00:21:40.333 filename=/dev/nvme0n4 00:21:40.333 Could not set queue depth (nvme0n1) 00:21:40.333 Could not set queue depth (nvme0n2) 00:21:40.333 Could not set queue depth (nvme0n3) 00:21:40.333 Could not set queue depth (nvme0n4) 00:21:40.333 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.333 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.333 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.333 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:40.333 fio-3.35 00:21:40.333 Starting 4 threads 00:21:41.713 00:21:41.713 job0: (groupid=0, jobs=1): err= 0: pid=75835: Thu Apr 18 15:08:57 2024 00:21:41.713 read: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9964KiB/1001msec) 00:21:41.713 slat (nsec): min=5988, max=33966, avg=7907.51, stdev=1845.00 00:21:41.713 clat (usec): min=119, max=847, avg=214.82, stdev=45.21 00:21:41.713 lat (usec): min=127, max=863, avg=222.73, stdev=45.20 00:21:41.713 clat percentiles (usec): 00:21:41.713 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 161], 00:21:41.713 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:21:41.713 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:21:41.713 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 424], 99.95th=[ 627], 00:21:41.713 | 99.99th=[ 848] 00:21:41.713 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:21:41.713 slat (usec): min=7, max=140, avg=15.17, stdev= 6.91 00:21:41.713 clat (usec): min=76, max=5834, avg=156.97, stdev=166.81 00:21:41.713 lat (usec): min=88, max=5855, avg=172.14, stdev=166.74 00:21:41.713 clat percentiles (usec): 00:21:41.713 | 1.00th=[ 92], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 108], 00:21:41.713 | 30.00th=[ 115], 40.00th=[ 133], 50.00th=[ 157], 60.00th=[ 169], 00:21:41.713 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 210], 00:21:41.713 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 3392], 99.95th=[ 3621], 00:21:41.713 | 99.99th=[ 5866] 00:21:41.713 bw ( KiB/s): min=12288, max=12288, per=31.79%, avg=12288.00, stdev= 0.00, samples=1 00:21:41.713 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:21:41.713 lat (usec) : 100=4.34%, 250=89.23%, 500=6.30%, 750=0.02%, 1000=0.02% 00:21:41.713 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02% 00:21:41.713 cpu : usr=1.10%, sys=4.70%, ctx=5054, majf=0, minf=3 00:21:41.713 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 issued rwts: total=2491,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.714 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:21:41.714 job1: (groupid=0, jobs=1): err= 0: pid=75836: Thu Apr 18 15:08:57 2024 00:21:41.714 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:41.714 slat (nsec): min=7644, max=58191, avg=9096.97, stdev=2385.98 00:21:41.714 clat (usec): min=140, max=643, avg=251.12, stdev=28.60 00:21:41.714 lat (usec): min=148, max=652, avg=260.22, stdev=29.02 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 208], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:21:41.714 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:21:41.714 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:21:41.714 | 99.00th=[ 363], 99.50th=[ 429], 99.90th=[ 545], 99.95th=[ 586], 00:21:41.714 | 99.99th=[ 644] 00:21:41.714 write: IOPS=2280, BW=9123KiB/s (9342kB/s)(9132KiB/1001msec); 0 zone resets 00:21:41.714 slat (usec): min=12, max=111, avg=15.34, stdev= 6.10 00:21:41.714 clat (usec): min=82, max=586, avg=187.31, stdev=26.42 00:21:41.714 lat (usec): min=95, max=622, avg=202.65, stdev=27.10 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 101], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:21:41.714 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:21:41.714 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:21:41.714 | 99.00th=[ 239], 99.50th=[ 306], 99.90th=[ 529], 99.95th=[ 537], 00:21:41.714 | 99.99th=[ 586] 00:21:41.714 bw ( KiB/s): min= 9048, max= 9048, per=23.41%, avg=9048.00, stdev= 0.00, samples=1 00:21:41.714 iops : min= 2262, max= 2262, avg=2262.00, stdev= 0.00, samples=1 00:21:41.714 lat (usec) : 100=0.48%, 250=79.91%, 500=19.44%, 750=0.16% 00:21:41.714 cpu : usr=1.10%, sys=4.00%, ctx=4343, majf=0, minf=19 00:21:41.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 issued rwts: total=2048,2283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:41.714 job2: (groupid=0, jobs=1): err= 0: pid=75837: Thu Apr 18 15:08:57 2024 00:21:41.714 read: IOPS=2510, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:21:41.714 slat (nsec): min=6882, max=33441, avg=8828.22, stdev=1586.22 00:21:41.714 clat (usec): min=111, max=401, avg=215.57, stdev=38.17 00:21:41.714 lat (usec): min=120, max=410, avg=224.40, stdev=38.28 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 169], 00:21:41.714 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 231], 00:21:41.714 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:21:41.714 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 334], 99.95th=[ 334], 00:21:41.714 | 99.99th=[ 400] 00:21:41.714 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:21:41.714 slat (usec): min=7, max=144, avg=14.84, stdev= 6.80 00:21:41.714 clat (usec): min=79, max=1596, avg=153.46, stdev=46.31 00:21:41.714 lat (usec): min=92, max=1609, avg=168.30, stdev=45.86 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 116], 00:21:41.714 | 30.00th=[ 123], 40.00th=[ 135], 50.00th=[ 157], 60.00th=[ 169], 00:21:41.714 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 210], 00:21:41.714 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 338], 
99.95th=[ 420], 00:21:41.714 | 99.99th=[ 1598] 00:21:41.714 bw ( KiB/s): min=12288, max=12288, per=31.79%, avg=12288.00, stdev= 0.00, samples=1 00:21:41.714 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:21:41.714 lat (usec) : 100=1.18%, 250=92.79%, 500=6.01% 00:21:41.714 lat (msec) : 2=0.02% 00:21:41.714 cpu : usr=1.30%, sys=4.70%, ctx=5076, majf=0, minf=13 00:21:41.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 issued rwts: total=2513,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:41.714 job3: (groupid=0, jobs=1): err= 0: pid=75838: Thu Apr 18 15:08:57 2024 00:21:41.714 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:21:41.714 slat (nsec): min=8115, max=27529, avg=9332.86, stdev=1739.79 00:21:41.714 clat (usec): min=160, max=649, avg=250.92, stdev=25.95 00:21:41.714 lat (usec): min=170, max=660, avg=260.25, stdev=26.04 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:21:41.714 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:21:41.714 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 281], 00:21:41.714 | 99.00th=[ 355], 99.50th=[ 408], 99.90th=[ 510], 99.95th=[ 603], 00:21:41.714 | 99.99th=[ 652] 00:21:41.714 write: IOPS=2266, BW=9067KiB/s (9285kB/s)(9076KiB/1001msec); 0 zone resets 00:21:41.714 slat (usec): min=12, max=123, avg=15.68, stdev= 6.37 00:21:41.714 clat (usec): min=96, max=2006, avg=188.11, stdev=44.17 00:21:41.714 lat (usec): min=111, max=2021, avg=203.79, stdev=44.51 00:21:41.714 clat percentiles (usec): 00:21:41.714 | 1.00th=[ 133], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:21:41.714 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:21:41.714 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:21:41.714 | 99.00th=[ 237], 99.50th=[ 255], 99.90th=[ 490], 99.95th=[ 619], 00:21:41.714 | 99.99th=[ 2008] 00:21:41.714 bw ( KiB/s): min= 8960, max= 8960, per=23.18%, avg=8960.00, stdev= 0.00, samples=1 00:21:41.714 iops : min= 2240, max= 2240, avg=2240.00, stdev= 0.00, samples=1 00:21:41.714 lat (usec) : 100=0.05%, 250=80.19%, 500=19.64%, 750=0.09% 00:21:41.714 lat (msec) : 4=0.02% 00:21:41.714 cpu : usr=0.90%, sys=4.20%, ctx=4317, majf=0, minf=10 00:21:41.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.714 issued rwts: total=2048,2269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:41.714 00:21:41.714 Run status group 0 (all jobs): 00:21:41.714 READ: bw=35.5MiB/s (37.2MB/s), 8184KiB/s-9.81MiB/s (8380kB/s-10.3MB/s), io=35.5MiB (37.3MB), run=1001-1001msec 00:21:41.714 WRITE: bw=37.7MiB/s (39.6MB/s), 9067KiB/s-9.99MiB/s (9285kB/s-10.5MB/s), io=37.8MiB (39.6MB), run=1001-1001msec 00:21:41.714 00:21:41.714 Disk stats (read/write): 00:21:41.714 nvme0n1: ios=2098/2442, merge=0/0, ticks=443/376, in_queue=819, util=87.17% 00:21:41.714 nvme0n2: ios=1762/2048, merge=0/0, ticks=458/402, in_queue=860, util=89.49% 00:21:41.714 nvme0n3: ios=2054/2475, merge=0/0, 
ticks=429/384, in_queue=813, util=89.41% 00:21:41.714 nvme0n4: ios=1707/2048, merge=0/0, ticks=430/394, in_queue=824, util=89.77% 00:21:41.714 15:08:57 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:41.714 [global] 00:21:41.714 thread=1 00:21:41.714 invalidate=1 00:21:41.714 rw=write 00:21:41.714 time_based=1 00:21:41.714 runtime=1 00:21:41.714 ioengine=libaio 00:21:41.714 direct=1 00:21:41.714 bs=4096 00:21:41.714 iodepth=128 00:21:41.714 norandommap=0 00:21:41.714 numjobs=1 00:21:41.714 00:21:41.714 verify_dump=1 00:21:41.714 verify_backlog=512 00:21:41.714 verify_state_save=0 00:21:41.714 do_verify=1 00:21:41.714 verify=crc32c-intel 00:21:41.714 [job0] 00:21:41.714 filename=/dev/nvme0n1 00:21:41.714 [job1] 00:21:41.714 filename=/dev/nvme0n2 00:21:41.714 [job2] 00:21:41.714 filename=/dev/nvme0n3 00:21:41.714 [job3] 00:21:41.714 filename=/dev/nvme0n4 00:21:41.714 Could not set queue depth (nvme0n1) 00:21:41.714 Could not set queue depth (nvme0n2) 00:21:41.714 Could not set queue depth (nvme0n3) 00:21:41.714 Could not set queue depth (nvme0n4) 00:21:41.974 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:41.974 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:41.974 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:41.974 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:41.974 fio-3.35 00:21:41.974 Starting 4 threads 00:21:43.354 00:21:43.354 job0: (groupid=0, jobs=1): err= 0: pid=75895: Thu Apr 18 15:08:58 2024 00:21:43.354 read: IOPS=2298, BW=9192KiB/s (9413kB/s)(9220KiB/1003msec) 00:21:43.354 slat (usec): min=5, max=12171, avg=214.27, stdev=1022.09 00:21:43.354 clat (usec): min=1894, max=47667, avg=27134.39, stdev=6891.51 00:21:43.354 lat (usec): min=6260, max=47687, avg=27348.67, stdev=6872.61 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[ 6456], 5.00th=[19792], 10.00th=[21627], 20.00th=[23462], 00:21:43.354 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[26608], 00:21:43.354 | 70.00th=[28705], 80.00th=[30016], 90.00th=[35914], 95.00th=[43779], 00:21:43.354 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[47449], 00:21:43.354 | 99.99th=[47449] 00:21:43.354 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:21:43.354 slat (usec): min=16, max=11926, avg=186.23, stdev=803.37 00:21:43.354 clat (usec): min=14198, max=40780, avg=24751.68, stdev=5015.06 00:21:43.354 lat (usec): min=14254, max=40815, avg=24937.91, stdev=5002.08 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[16712], 5.00th=[17957], 10.00th=[19006], 20.00th=[20579], 00:21:43.354 | 30.00th=[21365], 40.00th=[23200], 50.00th=[24511], 60.00th=[25035], 00:21:43.354 | 70.00th=[26346], 80.00th=[28443], 90.00th=[33162], 95.00th=[35390], 00:21:43.354 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:21:43.354 | 99.99th=[40633] 00:21:43.354 bw ( KiB/s): min= 8208, max=12288, per=16.07%, avg=10248.00, stdev=2885.00, samples=2 00:21:43.354 iops : min= 2052, max= 3072, avg=2562.00, stdev=721.25, samples=2 00:21:43.354 lat (msec) : 2=0.02%, 10=0.66%, 20=9.60%, 50=89.72% 00:21:43.354 cpu : usr=2.89%, sys=10.48%, ctx=243, majf=0, minf=8 00:21:43.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 
00:21:43.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.354 issued rwts: total=2305,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.354 job1: (groupid=0, jobs=1): err= 0: pid=75896: Thu Apr 18 15:08:58 2024 00:21:43.354 read: IOPS=5591, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1004msec) 00:21:43.354 slat (usec): min=6, max=4984, avg=83.61, stdev=355.06 00:21:43.354 clat (usec): min=1875, max=18843, avg=11512.43, stdev=1428.46 00:21:43.354 lat (usec): min=3524, max=18863, avg=11596.04, stdev=1433.11 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:21:43.354 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:21:43.354 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13042], 95.00th=[13698], 00:21:43.354 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16057], 99.95th=[16057], 00:21:43.354 | 99.99th=[18744] 00:21:43.354 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:21:43.354 slat (usec): min=11, max=3829, avg=81.77, stdev=272.82 00:21:43.354 clat (usec): min=7130, max=16100, avg=11051.49, stdev=1255.46 00:21:43.354 lat (usec): min=7165, max=16276, avg=11133.26, stdev=1251.53 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10421], 00:21:43.354 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:21:43.354 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12911], 00:21:43.354 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16057], 99.95th=[16057], 00:21:43.354 | 99.99th=[16057] 00:21:43.354 bw ( KiB/s): min=21064, max=24040, per=35.37%, avg=22552.00, stdev=2104.35, samples=2 00:21:43.354 iops : min= 5266, max= 6010, avg=5638.00, stdev=526.09, samples=2 00:21:43.354 lat (msec) : 2=0.01%, 4=0.15%, 10=12.72%, 20=87.12% 00:21:43.354 cpu : usr=5.98%, sys=24.73%, ctx=683, majf=0, minf=1 00:21:43.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:43.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.354 issued rwts: total=5614,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.354 job2: (groupid=0, jobs=1): err= 0: pid=75897: Thu Apr 18 15:08:58 2024 00:21:43.354 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:21:43.354 slat (usec): min=10, max=9748, avg=192.79, stdev=969.07 00:21:43.354 clat (usec): min=16138, max=38832, avg=24369.08, stdev=3800.12 00:21:43.354 lat (usec): min=16158, max=40736, avg=24561.88, stdev=3896.79 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[17433], 5.00th=[19792], 10.00th=[20055], 20.00th=[21103], 00:21:43.354 | 30.00th=[21627], 40.00th=[22414], 50.00th=[23462], 60.00th=[24773], 00:21:43.354 | 70.00th=[26346], 80.00th=[27657], 90.00th=[29492], 95.00th=[31327], 00:21:43.354 | 99.00th=[33424], 99.50th=[35390], 99.90th=[37487], 99.95th=[38011], 00:21:43.354 | 99.99th=[39060] 00:21:43.354 write: IOPS=2745, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1006msec); 0 zone resets 00:21:43.354 slat (usec): min=24, max=6222, avg=171.14, stdev=602.13 00:21:43.354 clat (usec): min=3860, max=34223, avg=23371.52, stdev=4184.65 00:21:43.354 lat (usec): min=7053, 
max=35993, avg=23542.65, stdev=4206.52 00:21:43.354 clat percentiles (usec): 00:21:43.354 | 1.00th=[13173], 5.00th=[15926], 10.00th=[18220], 20.00th=[19530], 00:21:43.354 | 30.00th=[21365], 40.00th=[23200], 50.00th=[24773], 60.00th=[25035], 00:21:43.354 | 70.00th=[25297], 80.00th=[26084], 90.00th=[27657], 95.00th=[29754], 00:21:43.354 | 99.00th=[32637], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:21:43.354 | 99.99th=[34341] 00:21:43.354 bw ( KiB/s): min= 8792, max=12288, per=16.53%, avg=10540.00, stdev=2472.05, samples=2 00:21:43.354 iops : min= 2198, max= 3072, avg=2635.00, stdev=618.01, samples=2 00:21:43.354 lat (msec) : 4=0.02%, 10=0.30%, 20=15.50%, 50=84.18% 00:21:43.354 cpu : usr=3.68%, sys=11.74%, ctx=349, majf=0, minf=1 00:21:43.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:43.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.354 issued rwts: total=2560,2762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.354 job3: (groupid=0, jobs=1): err= 0: pid=75898: Thu Apr 18 15:08:58 2024 00:21:43.354 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:21:43.354 slat (usec): min=10, max=4246, avg=99.09, stdev=444.67 00:21:43.354 clat (usec): min=9612, max=17505, avg=13314.17, stdev=1249.93 00:21:43.355 lat (usec): min=9642, max=18880, avg=13413.27, stdev=1245.79 00:21:43.355 clat percentiles (usec): 00:21:43.355 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11207], 20.00th=[12256], 00:21:43.355 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:21:43.355 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14615], 95.00th=[14877], 00:21:43.355 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:21:43.355 | 99.99th=[17433] 00:21:43.355 write: IOPS=5061, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1004msec); 0 zone resets 00:21:43.355 slat (usec): min=23, max=3055, avg=94.96, stdev=313.11 00:21:43.355 clat (usec): min=3093, max=17493, avg=12877.19, stdev=1423.52 00:21:43.355 lat (usec): min=3630, max=17528, avg=12972.15, stdev=1406.70 00:21:43.355 clat percentiles (usec): 00:21:43.355 | 1.00th=[ 8717], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:21:43.355 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:21:43.355 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:21:43.355 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17171], 99.95th=[17171], 00:21:43.355 | 99.99th=[17433] 00:21:43.355 bw ( KiB/s): min=19152, max=20521, per=31.11%, avg=19836.50, stdev=968.03, samples=2 00:21:43.355 iops : min= 4788, max= 5130, avg=4959.00, stdev=241.83, samples=2 00:21:43.355 lat (msec) : 4=0.09%, 10=0.73%, 20=99.17% 00:21:43.355 cpu : usr=6.48%, sys=20.64%, ctx=703, majf=0, minf=1 00:21:43.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:43.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.355 issued rwts: total=4608,5082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.355 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.355 00:21:43.355 Run status group 0 (all jobs): 00:21:43.355 READ: bw=58.6MiB/s (61.4MB/s), 9192KiB/s-21.8MiB/s (9413kB/s-22.9MB/s), io=58.9MiB (61.8MB), run=1003-1006msec 00:21:43.355 WRITE: bw=62.3MiB/s 
(65.3MB/s), 9.97MiB/s-21.9MiB/s (10.5MB/s-23.0MB/s), io=62.6MiB (65.7MB), run=1003-1006msec 00:21:43.355 00:21:43.355 Disk stats (read/write): 00:21:43.355 nvme0n1: ios=2092/2058, merge=0/0, ticks=13936/11198, in_queue=25134, util=87.54% 00:21:43.355 nvme0n2: ios=4633/5029, merge=0/0, ticks=24618/22698, in_queue=47316, util=88.81% 00:21:43.355 nvme0n3: ios=2048/2560, merge=0/0, ticks=15206/18143, in_queue=33349, util=89.25% 00:21:43.355 nvme0n4: ios=4096/4189, merge=0/0, ticks=16114/14793, in_queue=30907, util=89.60% 00:21:43.355 15:08:58 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:43.355 [global] 00:21:43.355 thread=1 00:21:43.355 invalidate=1 00:21:43.355 rw=randwrite 00:21:43.355 time_based=1 00:21:43.355 runtime=1 00:21:43.355 ioengine=libaio 00:21:43.355 direct=1 00:21:43.355 bs=4096 00:21:43.355 iodepth=128 00:21:43.355 norandommap=0 00:21:43.355 numjobs=1 00:21:43.355 00:21:43.355 verify_dump=1 00:21:43.355 verify_backlog=512 00:21:43.355 verify_state_save=0 00:21:43.355 do_verify=1 00:21:43.355 verify=crc32c-intel 00:21:43.355 [job0] 00:21:43.355 filename=/dev/nvme0n1 00:21:43.355 [job1] 00:21:43.355 filename=/dev/nvme0n2 00:21:43.355 [job2] 00:21:43.355 filename=/dev/nvme0n3 00:21:43.355 [job3] 00:21:43.355 filename=/dev/nvme0n4 00:21:43.355 Could not set queue depth (nvme0n1) 00:21:43.355 Could not set queue depth (nvme0n2) 00:21:43.355 Could not set queue depth (nvme0n3) 00:21:43.355 Could not set queue depth (nvme0n4) 00:21:43.355 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:43.355 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:43.355 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:43.355 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:43.355 fio-3.35 00:21:43.355 Starting 4 threads 00:21:44.741 00:21:44.741 job0: (groupid=0, jobs=1): err= 0: pid=75953: Thu Apr 18 15:09:00 2024 00:21:44.741 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:21:44.741 slat (usec): min=9, max=7011, avg=192.47, stdev=921.99 00:21:44.741 clat (usec): min=16059, max=33542, avg=25140.64, stdev=3994.37 00:21:44.741 lat (usec): min=17338, max=34375, avg=25333.11, stdev=3937.83 00:21:44.741 clat percentiles (usec): 00:21:44.741 | 1.00th=[17433], 5.00th=[19268], 10.00th=[20317], 20.00th=[20841], 00:21:44.741 | 30.00th=[21627], 40.00th=[23200], 50.00th=[25822], 60.00th=[27919], 00:21:44.741 | 70.00th=[28967], 80.00th=[29230], 90.00th=[29754], 95.00th=[30016], 00:21:44.741 | 99.00th=[31327], 99.50th=[32113], 99.90th=[33424], 99.95th=[33424], 00:21:44.741 | 99.99th=[33424] 00:21:44.741 write: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1003msec); 0 zone resets 00:21:44.741 slat (usec): min=14, max=6808, avg=170.59, stdev=626.42 00:21:44.741 clat (usec): min=1048, max=37066, avg=22257.36, stdev=5828.83 00:21:44.741 lat (usec): min=3373, max=37102, avg=22427.95, stdev=5834.47 00:21:44.741 clat percentiles (usec): 00:21:44.741 | 1.00th=[ 3982], 5.00th=[14746], 10.00th=[15926], 20.00th=[17433], 00:21:44.741 | 30.00th=[17695], 40.00th=[19792], 50.00th=[23200], 60.00th=[24249], 00:21:44.742 | 70.00th=[24773], 80.00th=[27132], 90.00th=[29230], 95.00th=[32637], 00:21:44.742 | 99.00th=[35390], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:21:44.742 
| 99.99th=[36963] 00:21:44.742 bw ( KiB/s): min= 9088, max=12152, per=16.91%, avg=10620.00, stdev=2166.58, samples=2 00:21:44.742 iops : min= 2272, max= 3038, avg=2655.00, stdev=541.64, samples=2 00:21:44.742 lat (msec) : 2=0.02%, 4=0.52%, 10=0.67%, 20=24.14%, 50=74.64% 00:21:44.742 cpu : usr=3.79%, sys=10.98%, ctx=322, majf=0, minf=19 00:21:44.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:44.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.742 issued rwts: total=2560,2783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.742 job1: (groupid=0, jobs=1): err= 0: pid=75954: Thu Apr 18 15:09:00 2024 00:21:44.742 read: IOPS=5380, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1002msec) 00:21:44.742 slat (usec): min=6, max=3137, avg=88.67, stdev=344.49 00:21:44.742 clat (usec): min=433, max=14445, avg=11672.19, stdev=1281.11 00:21:44.742 lat (usec): min=2562, max=14464, avg=11760.86, stdev=1264.74 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[ 5276], 5.00th=[10028], 10.00th=[10421], 20.00th=[10945], 00:21:44.742 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:21:44.742 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13042], 00:21:44.742 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14091], 99.95th=[14091], 00:21:44.742 | 99.99th=[14484] 00:21:44.742 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:21:44.742 slat (usec): min=8, max=2635, avg=83.31, stdev=292.32 00:21:44.742 clat (usec): min=8192, max=14247, avg=11294.06, stdev=1066.21 00:21:44.742 lat (usec): min=8222, max=14266, avg=11377.37, stdev=1068.59 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10552], 00:21:44.742 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:21:44.742 | 70.00th=[11600], 80.00th=[12256], 90.00th=[12780], 95.00th=[13042], 00:21:44.742 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14091], 99.95th=[14091], 00:21:44.742 | 99.99th=[14222] 00:21:44.742 bw ( KiB/s): min=21867, max=23232, per=35.91%, avg=22549.50, stdev=965.20, samples=2 00:21:44.742 iops : min= 5466, max= 5808, avg=5637.00, stdev=241.83, samples=2 00:21:44.742 lat (usec) : 500=0.01% 00:21:44.742 lat (msec) : 4=0.29%, 10=7.01%, 20=92.69% 00:21:44.742 cpu : usr=5.49%, sys=17.68%, ctx=711, majf=0, minf=7 00:21:44.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:21:44.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.742 issued rwts: total=5391,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.742 job2: (groupid=0, jobs=1): err= 0: pid=75955: Thu Apr 18 15:09:00 2024 00:21:44.742 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:21:44.742 slat (usec): min=18, max=2883, avg=99.43, stdev=395.05 00:21:44.742 clat (usec): min=10730, max=23205, avg=13533.74, stdev=1041.00 00:21:44.742 lat (usec): min=10981, max=23226, avg=13633.16, stdev=996.56 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[11207], 5.00th=[11731], 10.00th=[11994], 20.00th=[12649], 00:21:44.742 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:21:44.742 | 
70.00th=[14091], 80.00th=[14353], 90.00th=[14615], 95.00th=[14746], 00:21:44.742 | 99.00th=[15926], 99.50th=[16188], 99.90th=[19268], 99.95th=[19268], 00:21:44.742 | 99.99th=[23200] 00:21:44.742 write: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(18.9MiB/1007msec); 0 zone resets 00:21:44.742 slat (usec): min=22, max=4744, avg=99.34, stdev=345.47 00:21:44.742 clat (usec): min=4922, max=36985, avg=13389.84, stdev=3157.26 00:21:44.742 lat (usec): min=6284, max=37088, avg=13489.18, stdev=3172.06 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11600], 20.00th=[11994], 00:21:44.742 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:21:44.742 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14484], 95.00th=[15139], 00:21:44.742 | 99.00th=[31327], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:21:44.742 | 99.99th=[36963] 00:21:44.742 bw ( KiB/s): min=17160, max=20521, per=30.00%, avg=18840.50, stdev=2376.59, samples=2 00:21:44.742 iops : min= 4290, max= 5130, avg=4710.00, stdev=593.97, samples=2 00:21:44.742 lat (msec) : 10=0.11%, 20=98.13%, 50=1.77% 00:21:44.742 cpu : usr=6.06%, sys=20.38%, ctx=657, majf=0, minf=9 00:21:44.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:44.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.742 issued rwts: total=4608,4833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.742 job3: (groupid=0, jobs=1): err= 0: pid=75956: Thu Apr 18 15:09:00 2024 00:21:44.742 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(9.93MiB/1007msec) 00:21:44.742 slat (usec): min=18, max=13825, avg=183.80, stdev=870.91 00:21:44.742 clat (usec): min=20, max=42974, avg=22615.23, stdev=4008.05 00:21:44.742 lat (usec): min=6849, max=42994, avg=22799.04, stdev=4051.38 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[ 7570], 5.00th=[18482], 10.00th=[20055], 20.00th=[20841], 00:21:44.742 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:21:44.742 | 70.00th=[22676], 80.00th=[23987], 90.00th=[26870], 95.00th=[28181], 00:21:44.742 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[41681], 00:21:44.742 | 99.99th=[42730] 00:21:44.742 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:21:44.742 slat (usec): min=24, max=5709, avg=197.13, stdev=614.79 00:21:44.742 clat (usec): min=15999, max=40944, avg=27123.40, stdev=5178.22 00:21:44.742 lat (usec): min=16041, max=40979, avg=27320.53, stdev=5213.70 00:21:44.742 clat percentiles (usec): 00:21:44.742 | 1.00th=[19530], 5.00th=[20317], 10.00th=[20317], 20.00th=[22152], 00:21:44.742 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25822], 60.00th=[27919], 00:21:44.742 | 70.00th=[30016], 80.00th=[32113], 90.00th=[34866], 95.00th=[35390], 00:21:44.742 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:21:44.742 | 99.99th=[41157] 00:21:44.742 bw ( KiB/s): min= 8880, max=11623, per=16.33%, avg=10251.50, stdev=1939.59, samples=2 00:21:44.742 iops : min= 2220, max= 2905, avg=2562.50, stdev=484.37, samples=2 00:21:44.742 lat (usec) : 50=0.02% 00:21:44.742 lat (msec) : 10=0.82%, 20=6.31%, 50=92.85% 00:21:44.742 cpu : usr=3.18%, sys=11.73%, ctx=413, majf=0, minf=16 00:21:44.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:44.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.742 issued rwts: total=2542,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.742 00:21:44.742 Run status group 0 (all jobs): 00:21:44.742 READ: bw=58.6MiB/s (61.4MB/s), 9.86MiB/s-21.0MiB/s (10.3MB/s-22.0MB/s), io=59.0MiB (61.9MB), run=1002-1007msec 00:21:44.742 WRITE: bw=61.3MiB/s (64.3MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=61.8MiB (64.7MB), run=1002-1007msec 00:21:44.742 00:21:44.742 Disk stats (read/write): 00:21:44.742 nvme0n1: ios=2098/2551, merge=0/0, ticks=12194/13180, in_queue=25374, util=88.47% 00:21:44.742 nvme0n2: ios=4657/4825, merge=0/0, ticks=12699/11498, in_queue=24197, util=89.80% 00:21:44.742 nvme0n3: ios=4134/4287, merge=0/0, ticks=12472/10765, in_queue=23237, util=90.55% 00:21:44.742 nvme0n4: ios=2065/2407, merge=0/0, ticks=14294/19849, in_queue=34143, util=89.88% 00:21:44.742 15:09:00 -- target/fio.sh@55 -- # sync 00:21:44.742 15:09:00 -- target/fio.sh@59 -- # fio_pid=75975 00:21:44.742 15:09:00 -- target/fio.sh@61 -- # sleep 3 00:21:44.742 15:09:00 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:44.742 [global] 00:21:44.742 thread=1 00:21:44.742 invalidate=1 00:21:44.742 rw=read 00:21:44.742 time_based=1 00:21:44.742 runtime=10 00:21:44.742 ioengine=libaio 00:21:44.742 direct=1 00:21:44.742 bs=4096 00:21:44.742 iodepth=1 00:21:44.742 norandommap=1 00:21:44.742 numjobs=1 00:21:44.742 00:21:44.742 [job0] 00:21:44.742 filename=/dev/nvme0n1 00:21:44.742 [job1] 00:21:44.742 filename=/dev/nvme0n2 00:21:44.742 [job2] 00:21:44.742 filename=/dev/nvme0n3 00:21:44.742 [job3] 00:21:44.742 filename=/dev/nvme0n4 00:21:44.742 Could not set queue depth (nvme0n1) 00:21:44.742 Could not set queue depth (nvme0n2) 00:21:44.742 Could not set queue depth (nvme0n3) 00:21:44.742 Could not set queue depth (nvme0n4) 00:21:44.742 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:44.742 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:44.742 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:44.742 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:44.742 fio-3.35 00:21:44.742 Starting 4 threads 00:21:48.029 15:09:03 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:48.029 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=28409856, buflen=4096 00:21:48.029 fio: pid=76018, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:48.029 15:09:03 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:48.029 fio: pid=76017, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:48.029 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=70295552, buflen=4096 00:21:48.029 15:09:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:48.029 15:09:03 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:48.287 fio: pid=76015, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:48.287 fio: io_u error on file /dev/nvme0n1: Remote I/O 
error: read offset=63569920, buflen=4096 00:21:48.287 15:09:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:48.287 15:09:03 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:48.544 fio: pid=76016, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:48.544 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=20901888, buflen=4096 00:21:48.544 00:21:48.544 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76015: Thu Apr 18 15:09:04 2024 00:21:48.544 read: IOPS=4676, BW=18.3MiB/s (19.2MB/s)(60.6MiB/3319msec) 00:21:48.544 slat (usec): min=7, max=13216, avg=13.27, stdev=191.24 00:21:48.544 clat (usec): min=107, max=3729, avg=199.60, stdev=61.94 00:21:48.544 lat (usec): min=116, max=13399, avg=212.88, stdev=200.73 00:21:48.544 clat percentiles (usec): 00:21:48.544 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 153], 00:21:48.544 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 200], 60.00th=[ 227], 00:21:48.544 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:21:48.544 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 586], 99.95th=[ 1090], 00:21:48.544 | 99.99th=[ 2073] 00:21:48.544 bw ( KiB/s): min=17679, max=19232, per=27.38%, avg=18794.50, stdev=591.61, samples=6 00:21:48.544 iops : min= 4419, max= 4808, avg=4698.50, stdev=148.19, samples=6 00:21:48.544 lat (usec) : 250=87.83%, 500=12.04%, 750=0.06%, 1000=0.01% 00:21:48.544 lat (msec) : 2=0.04%, 4=0.01% 00:21:48.544 cpu : usr=0.87%, sys=4.10%, ctx=15541, majf=0, minf=1 00:21:48.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.544 issued rwts: total=15521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.544 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76016: Thu Apr 18 15:09:04 2024 00:21:48.544 read: IOPS=6034, BW=23.6MiB/s (24.7MB/s)(83.9MiB/3561msec) 00:21:48.544 slat (usec): min=7, max=11708, avg=12.38, stdev=151.91 00:21:48.544 clat (usec): min=55, max=3171, avg=152.48, stdev=51.44 00:21:48.544 lat (usec): min=117, max=11968, avg=164.86, stdev=161.05 00:21:48.544 clat percentiles (usec): 00:21:48.544 | 1.00th=[ 119], 5.00th=[ 126], 10.00th=[ 133], 20.00th=[ 137], 00:21:48.544 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:21:48.544 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 206], 00:21:48.544 | 99.00th=[ 265], 99.50th=[ 314], 99.90th=[ 545], 99.95th=[ 1172], 00:21:48.544 | 99.99th=[ 1909] 00:21:48.544 bw ( KiB/s): min=21393, max=25528, per=35.77%, avg=24553.50, stdev=1595.67, samples=6 00:21:48.544 iops : min= 5348, max= 6382, avg=6138.33, stdev=399.02, samples=6 00:21:48.544 lat (usec) : 100=0.01%, 250=97.98%, 500=1.89%, 750=0.05% 00:21:48.544 lat (msec) : 2=0.05%, 4=0.01% 00:21:48.544 cpu : usr=1.07%, sys=5.17%, ctx=21515, majf=0, minf=1 00:21:48.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.544 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.544 issued rwts: total=21488,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:48.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.544 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76017: Thu Apr 18 15:09:04 2024 00:21:48.544 read: IOPS=5559, BW=21.7MiB/s (22.8MB/s)(67.0MiB/3087msec) 00:21:48.544 slat (usec): min=6, max=7784, avg=10.36, stdev=82.61 00:21:48.545 clat (usec): min=92, max=3596, avg=168.66, stdev=48.00 00:21:48.545 lat (usec): min=133, max=7989, avg=179.02, stdev=95.89 00:21:48.545 clat percentiles (usec): 00:21:48.545 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:21:48.545 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:21:48.545 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:21:48.545 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 553], 00:21:48.545 | 99.99th=[ 3359] 00:21:48.545 bw ( KiB/s): min=22480, max=22728, per=32.96%, avg=22624.00, stdev=98.47, samples=5 00:21:48.545 iops : min= 5620, max= 5682, avg=5656.00, stdev=24.62, samples=5 00:21:48.545 lat (usec) : 100=0.01%, 250=98.42%, 500=1.51%, 750=0.03%, 1000=0.01% 00:21:48.545 lat (msec) : 2=0.01%, 4=0.02% 00:21:48.545 cpu : usr=1.13%, sys=4.34%, ctx=17167, majf=0, minf=1 00:21:48.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.545 issued rwts: total=17163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.545 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76018: Thu Apr 18 15:09:04 2024 00:21:48.545 read: IOPS=2439, BW=9755KiB/s (9989kB/s)(27.1MiB/2844msec) 00:21:48.545 slat (usec): min=7, max=154, avg=19.40, stdev= 7.11 00:21:48.545 clat (usec): min=145, max=7475, avg=388.85, stdev=132.75 00:21:48.545 lat (usec): min=158, max=7495, avg=408.25, stdev=133.13 00:21:48.545 clat percentiles (usec): 00:21:48.545 | 1.00th=[ 231], 5.00th=[ 265], 10.00th=[ 338], 20.00th=[ 371], 00:21:48.545 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 392], 60.00th=[ 396], 00:21:48.545 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 424], 95.00th=[ 437], 00:21:48.545 | 99.00th=[ 474], 99.50th=[ 586], 99.90th=[ 1975], 99.95th=[ 2057], 00:21:48.545 | 99.99th=[ 7504] 00:21:48.545 bw ( KiB/s): min= 9160, max= 9824, per=14.00%, avg=9611.20, stdev=265.37, samples=5 00:21:48.545 iops : min= 2290, max= 2456, avg=2402.80, stdev=66.34, samples=5 00:21:48.545 lat (usec) : 250=3.17%, 500=96.08%, 750=0.35%, 1000=0.12% 00:21:48.545 lat (msec) : 2=0.20%, 4=0.04%, 10=0.03% 00:21:48.545 cpu : usr=1.09%, sys=4.12%, ctx=6938, majf=0, minf=2 00:21:48.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:48.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.545 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:48.545 issued rwts: total=6937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:48.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:48.545 00:21:48.545 Run status group 0 (all jobs): 00:21:48.545 READ: bw=67.0MiB/s (70.3MB/s), 9755KiB/s-23.6MiB/s (9989kB/s-24.7MB/s), io=239MiB (250MB), run=2844-3561msec 00:21:48.545 00:21:48.545 Disk stats (read/write): 00:21:48.545 nvme0n1: ios=14613/0, merge=0/0, ticks=2946/0, in_queue=2946, util=94.70% 
00:21:48.545 nvme0n2: ios=20101/0, merge=0/0, ticks=3127/0, in_queue=3127, util=95.04% 00:21:48.545 nvme0n3: ios=16014/0, merge=0/0, ticks=2729/0, in_queue=2729, util=96.50% 00:21:48.545 nvme0n4: ios=6290/0, merge=0/0, ticks=2491/0, in_queue=2491, util=96.36% 00:21:48.545 15:09:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:48.545 15:09:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:48.804 15:09:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:48.804 15:09:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:49.062 15:09:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:49.062 15:09:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:49.062 15:09:04 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:49.063 15:09:04 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:49.321 15:09:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:49.321 15:09:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:49.579 15:09:05 -- target/fio.sh@69 -- # fio_status=0 00:21:49.579 15:09:05 -- target/fio.sh@70 -- # wait 75975 00:21:49.579 15:09:05 -- target/fio.sh@70 -- # fio_status=4 00:21:49.579 15:09:05 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:49.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:49.579 15:09:05 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:49.579 15:09:05 -- common/autotest_common.sh@1205 -- # local i=0 00:21:49.579 15:09:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:49.579 15:09:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:49.837 15:09:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:49.837 15:09:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:49.837 nvmf hotplug test: fio failed as expected 00:21:49.837 15:09:05 -- common/autotest_common.sh@1217 -- # return 0 00:21:49.837 15:09:05 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:49.837 15:09:05 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:49.837 15:09:05 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.837 15:09:05 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:49.837 15:09:05 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:50.095 15:09:05 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:50.095 15:09:05 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:50.095 15:09:05 -- target/fio.sh@91 -- # nvmftestfini 00:21:50.095 15:09:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:50.095 15:09:05 -- nvmf/common.sh@117 -- # sync 00:21:50.095 15:09:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.095 15:09:05 -- nvmf/common.sh@120 -- # set +e 00:21:50.095 15:09:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.095 15:09:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.095 rmmod nvme_tcp 00:21:50.096 rmmod nvme_fabrics 
00:21:50.096 rmmod nvme_keyring 00:21:50.096 15:09:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.096 15:09:05 -- nvmf/common.sh@124 -- # set -e 00:21:50.096 15:09:05 -- nvmf/common.sh@125 -- # return 0 00:21:50.096 15:09:05 -- nvmf/common.sh@478 -- # '[' -n 75486 ']' 00:21:50.096 15:09:05 -- nvmf/common.sh@479 -- # killprocess 75486 00:21:50.096 15:09:05 -- common/autotest_common.sh@936 -- # '[' -z 75486 ']' 00:21:50.096 15:09:05 -- common/autotest_common.sh@940 -- # kill -0 75486 00:21:50.096 15:09:05 -- common/autotest_common.sh@941 -- # uname 00:21:50.096 15:09:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.096 15:09:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75486 00:21:50.096 killing process with pid 75486 00:21:50.096 15:09:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:50.096 15:09:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:50.096 15:09:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75486' 00:21:50.096 15:09:05 -- common/autotest_common.sh@955 -- # kill 75486 00:21:50.096 15:09:05 -- common/autotest_common.sh@960 -- # wait 75486 00:21:50.354 15:09:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:50.354 15:09:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:50.354 15:09:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:50.354 15:09:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.354 15:09:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.354 15:09:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.354 15:09:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.354 15:09:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.354 15:09:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:50.355 ************************************ 00:21:50.355 END TEST nvmf_fio_target 00:21:50.355 ************************************ 00:21:50.355 00:21:50.355 real 0m18.954s 00:21:50.355 user 1m11.567s 00:21:50.355 sys 0m9.213s 00:21:50.355 15:09:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:50.355 15:09:05 -- common/autotest_common.sh@10 -- # set +x 00:21:50.355 15:09:06 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:50.355 15:09:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:50.355 15:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:50.355 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:21:50.619 ************************************ 00:21:50.619 START TEST nvmf_bdevio 00:21:50.619 ************************************ 00:21:50.619 15:09:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:50.619 * Looking for test storage... 
00:21:50.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:50.619 15:09:06 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.619 15:09:06 -- nvmf/common.sh@7 -- # uname -s 00:21:50.619 15:09:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.619 15:09:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.619 15:09:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.619 15:09:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.619 15:09:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.619 15:09:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.619 15:09:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.619 15:09:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.619 15:09:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.619 15:09:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.619 15:09:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:50.619 15:09:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:50.620 15:09:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.620 15:09:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.620 15:09:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.620 15:09:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.620 15:09:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.620 15:09:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.620 15:09:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.620 15:09:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.620 15:09:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.620 15:09:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.621 15:09:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.621 15:09:06 -- paths/export.sh@5 -- # export PATH 00:21:50.621 15:09:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.621 15:09:06 -- nvmf/common.sh@47 -- # : 0 00:21:50.621 15:09:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.621 15:09:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.621 15:09:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.621 15:09:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.621 15:09:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.621 15:09:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.621 15:09:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.621 15:09:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.621 15:09:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.621 15:09:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.621 15:09:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:50.623 15:09:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:50.623 15:09:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.623 15:09:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:50.623 15:09:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:50.623 15:09:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:50.623 15:09:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.623 15:09:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.623 15:09:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.623 15:09:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:50.623 15:09:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:50.623 15:09:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:50.623 15:09:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:50.623 15:09:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:50.623 15:09:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:50.624 15:09:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.624 15:09:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.624 15:09:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:50.624 15:09:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:50.624 15:09:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:50.624 15:09:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:50.624 15:09:06 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:50.624 15:09:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.624 15:09:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:50.624 15:09:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:50.624 15:09:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:50.624 15:09:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:50.624 15:09:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:50.624 15:09:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:50.624 Cannot find device "nvmf_tgt_br" 00:21:50.624 15:09:06 -- nvmf/common.sh@155 -- # true 00:21:50.624 15:09:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.624 Cannot find device "nvmf_tgt_br2" 00:21:50.624 15:09:06 -- nvmf/common.sh@156 -- # true 00:21:50.625 15:09:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:50.888 15:09:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:50.888 Cannot find device "nvmf_tgt_br" 00:21:50.888 15:09:06 -- nvmf/common.sh@158 -- # true 00:21:50.888 15:09:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:50.888 Cannot find device "nvmf_tgt_br2" 00:21:50.888 15:09:06 -- nvmf/common.sh@159 -- # true 00:21:50.888 15:09:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:50.888 15:09:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:50.888 15:09:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:50.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.888 15:09:06 -- nvmf/common.sh@162 -- # true 00:21:50.888 15:09:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.888 15:09:06 -- nvmf/common.sh@163 -- # true 00:21:50.888 15:09:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:50.888 15:09:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:50.888 15:09:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:50.888 15:09:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:50.888 15:09:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:50.888 15:09:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:50.888 15:09:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:50.889 15:09:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:50.889 15:09:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:50.889 15:09:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:50.889 15:09:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:50.889 15:09:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:50.889 15:09:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:50.889 15:09:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:50.889 15:09:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:50.889 15:09:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:50.889 15:09:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:50.889 15:09:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:50.889 15:09:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:51.148 15:09:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:51.148 15:09:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:51.148 15:09:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:51.148 15:09:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:51.148 15:09:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:51.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:21:51.148 00:21:51.148 --- 10.0.0.2 ping statistics --- 00:21:51.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.148 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:21:51.148 15:09:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:51.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:51.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:21:51.148 00:21:51.148 --- 10.0.0.3 ping statistics --- 00:21:51.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.148 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:21:51.148 15:09:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:51.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:21:51.148 00:21:51.148 --- 10.0.0.1 ping statistics --- 00:21:51.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.148 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:51.148 15:09:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.148 15:09:06 -- nvmf/common.sh@422 -- # return 0 00:21:51.148 15:09:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:51.148 15:09:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.148 15:09:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:51.148 15:09:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:51.148 15:09:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.148 15:09:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:51.148 15:09:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:51.148 15:09:06 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:51.148 15:09:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:51.148 15:09:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:51.148 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:21:51.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.148 15:09:06 -- nvmf/common.sh@470 -- # nvmfpid=76351 00:21:51.148 15:09:06 -- nvmf/common.sh@471 -- # waitforlisten 76351 00:21:51.148 15:09:06 -- common/autotest_common.sh@817 -- # '[' -z 76351 ']' 00:21:51.148 15:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.148 15:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:51.148 15:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
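Before the target can listen on 10.0.0.2, nvmf_veth_init builds the topology that the three pings above verify: a dedicated namespace for the target, veth pairs for the initiator and two target interfaces, everything bridged through nvmf_br, plus iptables rules allowing port 4420 and bridge-local forwarding. A trimmed sketch of that bring-up, using the exact interface and address names from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, ping -c 1 to 10.0.0.2 and 10.0.0.3 from the host and to 10.0.0.1 from inside the namespace confirm connectivity, exactly as the statistics above show.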
00:21:51.148 15:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:51.148 15:09:06 -- common/autotest_common.sh@10 -- # set +x 00:21:51.148 15:09:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:51.148 [2024-04-18 15:09:06.769498] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:51.148 [2024-04-18 15:09:06.769617] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.408 [2024-04-18 15:09:06.918137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.408 [2024-04-18 15:09:07.015371] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.408 [2024-04-18 15:09:07.015886] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.408 [2024-04-18 15:09:07.016355] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.408 [2024-04-18 15:09:07.016693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.408 [2024-04-18 15:09:07.016873] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.408 [2024-04-18 15:09:07.017221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:51.408 [2024-04-18 15:09:07.017417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:51.408 [2024-04-18 15:09:07.017609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:51.408 [2024-04-18 15:09:07.017554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.347 15:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:52.347 15:09:07 -- common/autotest_common.sh@850 -- # return 0 00:21:52.347 15:09:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:52.347 15:09:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 15:09:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.347 15:09:07 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.347 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 [2024-04-18 15:09:07.765307] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.347 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.347 15:09:07 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.347 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 Malloc0 00:21:52.347 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.347 15:09:07 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.347 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.347 15:09:07 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.347 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.347 15:09:07 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.347 15:09:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.347 15:09:07 -- common/autotest_common.sh@10 -- # set +x 00:21:52.347 [2024-04-18 15:09:07.843337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.347 15:09:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.347 15:09:07 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:52.347 15:09:07 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:52.347 15:09:07 -- nvmf/common.sh@521 -- # config=() 00:21:52.347 15:09:07 -- nvmf/common.sh@521 -- # local subsystem config 00:21:52.347 15:09:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:52.347 15:09:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:52.347 { 00:21:52.347 "params": { 00:21:52.347 "name": "Nvme$subsystem", 00:21:52.347 "trtype": "$TEST_TRANSPORT", 00:21:52.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.347 "adrfam": "ipv4", 00:21:52.347 "trsvcid": "$NVMF_PORT", 00:21:52.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.347 "hdgst": ${hdgst:-false}, 00:21:52.347 "ddgst": ${ddgst:-false} 00:21:52.347 }, 00:21:52.347 "method": "bdev_nvme_attach_controller" 00:21:52.347 } 00:21:52.347 EOF 00:21:52.347 )") 00:21:52.347 15:09:07 -- nvmf/common.sh@543 -- # cat 00:21:52.347 15:09:07 -- nvmf/common.sh@545 -- # jq . 00:21:52.347 15:09:07 -- nvmf/common.sh@546 -- # IFS=, 00:21:52.347 15:09:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:52.347 "params": { 00:21:52.347 "name": "Nvme1", 00:21:52.347 "trtype": "tcp", 00:21:52.347 "traddr": "10.0.0.2", 00:21:52.347 "adrfam": "ipv4", 00:21:52.347 "trsvcid": "4420", 00:21:52.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.347 "hdgst": false, 00:21:52.347 "ddgst": false 00:21:52.347 }, 00:21:52.347 "method": "bdev_nvme_attach_controller" 00:21:52.347 }' 00:21:52.347 [2024-04-18 15:09:07.899776] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
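The bdevio target is provisioned entirely through the rpc_cmd calls traced above (target/bdevio.sh@18-22): the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem with serial SPDK00000000000001, the namespace attachment, and a listener on the in-namespace address. Run by hand against the target just started (which announced the default /var/tmp/spdk.sock RPC socket), the same sequence would be:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then renders the bdev_nvme_attach_controller entry printed in the trace and hands it to bdevio through --json /dev/fd/62.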
00:21:52.347 [2024-04-18 15:09:07.899851] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76405 ] 00:21:52.347 [2024-04-18 15:09:08.047551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:52.606 [2024-04-18 15:09:08.172623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.606 [2024-04-18 15:09:08.173980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.607 [2024-04-18 15:09:08.173982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.887 I/O targets: 00:21:52.887 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:52.887 00:21:52.887 00:21:52.887 CUnit - A unit testing framework for C - Version 2.1-3 00:21:52.887 http://cunit.sourceforge.net/ 00:21:52.887 00:21:52.887 00:21:52.887 Suite: bdevio tests on: Nvme1n1 00:21:52.887 Test: blockdev write read block ...passed 00:21:52.887 Test: blockdev write zeroes read block ...passed 00:21:52.887 Test: blockdev write zeroes read no split ...passed 00:21:52.887 Test: blockdev write zeroes read split ...passed 00:21:52.887 Test: blockdev write zeroes read split partial ...passed 00:21:52.887 Test: blockdev reset ...[2024-04-18 15:09:08.460303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.887 [2024-04-18 15:09:08.460744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1553610 (9): Bad file descriptor 00:21:52.887 [2024-04-18 15:09:08.480928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:52.887 passed 00:21:52.887 Test: blockdev write read 8 blocks ...passed 00:21:52.887 Test: blockdev write read size > 128k ...passed 00:21:52.887 Test: blockdev write read invalid size ...passed 00:21:52.887 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:52.887 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:52.887 Test: blockdev write read max offset ...passed 00:21:53.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:53.149 Test: blockdev writev readv 8 blocks ...passed 00:21:53.149 Test: blockdev writev readv 30 x 1block ...passed 00:21:53.149 Test: blockdev writev readv block ...passed 00:21:53.149 Test: blockdev writev readv size > 128k ...passed 00:21:53.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:53.149 Test: blockdev comparev and writev ...[2024-04-18 15:09:08.660453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.660938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.661059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.661149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.661493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.661640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.661727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.661799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.662216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.662397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.662516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.662579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.662936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.663142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.663341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:53.149 [2024-04-18 15:09:08.663560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:53.149 passed 00:21:53.149 Test: blockdev nvme passthru rw ...passed 00:21:53.149 Test: blockdev nvme passthru vendor specific ...[2024-04-18 15:09:08.746924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.149 [2024-04-18 15:09:08.747150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.747327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.149 [2024-04-18 15:09:08.747397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.747550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.149 [2024-04-18 15:09:08.747628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:53.149 [2024-04-18 15:09:08.747774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:53.149 passed 00:21:53.149 Test: blockdev nvme admin passthru ...[2024-04-18 15:09:08.747944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:53.149 passed 00:21:53.149 Test: blockdev copy ...passed 00:21:53.149 00:21:53.149 Run Summary: Type Total Ran Passed Failed Inactive 00:21:53.149 suites 1 1 n/a 0 0 00:21:53.149 tests 23 23 23 0 0 00:21:53.149 asserts 
152 152 152 0 n/a 00:21:53.149 00:21:53.149 Elapsed time = 0.921 seconds 00:21:53.409 15:09:09 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.409 15:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:53.409 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:21:53.409 15:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:53.409 15:09:09 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:53.409 15:09:09 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:53.409 15:09:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:53.409 15:09:09 -- nvmf/common.sh@117 -- # sync 00:21:53.409 15:09:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.409 15:09:09 -- nvmf/common.sh@120 -- # set +e 00:21:53.409 15:09:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.409 15:09:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.409 rmmod nvme_tcp 00:21:53.669 rmmod nvme_fabrics 00:21:53.669 rmmod nvme_keyring 00:21:53.669 15:09:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.669 15:09:09 -- nvmf/common.sh@124 -- # set -e 00:21:53.669 15:09:09 -- nvmf/common.sh@125 -- # return 0 00:21:53.669 15:09:09 -- nvmf/common.sh@478 -- # '[' -n 76351 ']' 00:21:53.669 15:09:09 -- nvmf/common.sh@479 -- # killprocess 76351 00:21:53.669 15:09:09 -- common/autotest_common.sh@936 -- # '[' -z 76351 ']' 00:21:53.669 15:09:09 -- common/autotest_common.sh@940 -- # kill -0 76351 00:21:53.669 15:09:09 -- common/autotest_common.sh@941 -- # uname 00:21:53.669 15:09:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.669 15:09:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76351 00:21:53.669 killing process with pid 76351 00:21:53.669 15:09:09 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:53.669 15:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:53.669 15:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76351' 00:21:53.669 15:09:09 -- common/autotest_common.sh@955 -- # kill 76351 00:21:53.669 15:09:09 -- common/autotest_common.sh@960 -- # wait 76351 00:21:53.928 15:09:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:53.928 15:09:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:53.928 15:09:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:53.928 15:09:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.928 15:09:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.928 15:09:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.928 15:09:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.928 15:09:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.928 15:09:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:53.928 00:21:53.928 real 0m3.446s 00:21:53.928 user 0m11.673s 00:21:53.928 sys 0m0.961s 00:21:53.928 15:09:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.928 ************************************ 00:21:53.928 END TEST nvmf_bdevio 00:21:53.928 ************************************ 00:21:53.928 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:21:53.928 15:09:09 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:21:53.928 15:09:09 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:53.928 15:09:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:53.928 
15:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:53.928 15:09:09 -- common/autotest_common.sh@10 -- # set +x 00:21:54.188 ************************************ 00:21:54.188 START TEST nvmf_bdevio_no_huge 00:21:54.188 ************************************ 00:21:54.188 15:09:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:54.188 * Looking for test storage... 00:21:54.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:54.188 15:09:09 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:54.188 15:09:09 -- nvmf/common.sh@7 -- # uname -s 00:21:54.188 15:09:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.188 15:09:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.188 15:09:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.188 15:09:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.188 15:09:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.188 15:09:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.188 15:09:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.188 15:09:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.188 15:09:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.188 15:09:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:54.188 15:09:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:54.188 15:09:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.188 15:09:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.188 15:09:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:54.188 15:09:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.188 15:09:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:54.188 15:09:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.188 15:09:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.188 15:09:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.188 15:09:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.188 15:09:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.188 15:09:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.188 15:09:09 -- paths/export.sh@5 -- # export PATH 00:21:54.188 15:09:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.188 15:09:09 -- nvmf/common.sh@47 -- # : 0 00:21:54.188 15:09:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.188 15:09:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.188 15:09:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.188 15:09:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.188 15:09:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.188 15:09:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.188 15:09:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.188 15:09:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.188 15:09:09 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:54.188 15:09:09 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:54.188 15:09:09 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:54.188 15:09:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:54.188 15:09:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.188 15:09:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:54.188 15:09:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:54.188 15:09:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:54.188 15:09:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.188 15:09:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.188 15:09:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.188 15:09:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:54.188 15:09:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:54.188 15:09:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.188 15:09:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.188 15:09:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:54.188 15:09:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:54.188 15:09:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:54.188 15:09:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:54.188 15:09:09 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:54.188 15:09:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.188 15:09:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:54.188 15:09:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:54.188 15:09:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:54.188 15:09:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:54.188 15:09:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:54.447 15:09:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:54.447 Cannot find device "nvmf_tgt_br" 00:21:54.447 15:09:09 -- nvmf/common.sh@155 -- # true 00:21:54.447 15:09:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:54.447 Cannot find device "nvmf_tgt_br2" 00:21:54.447 15:09:09 -- nvmf/common.sh@156 -- # true 00:21:54.447 15:09:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:54.447 15:09:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:54.447 Cannot find device "nvmf_tgt_br" 00:21:54.447 15:09:09 -- nvmf/common.sh@158 -- # true 00:21:54.447 15:09:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:54.447 Cannot find device "nvmf_tgt_br2" 00:21:54.447 15:09:09 -- nvmf/common.sh@159 -- # true 00:21:54.447 15:09:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:54.447 15:09:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:54.447 15:09:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.447 15:09:10 -- nvmf/common.sh@162 -- # true 00:21:54.447 15:09:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.447 15:09:10 -- nvmf/common.sh@163 -- # true 00:21:54.447 15:09:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.447 15:09:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:54.447 15:09:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.447 15:09:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.447 15:09:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.447 15:09:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.447 15:09:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.447 15:09:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:54.447 15:09:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:54.447 15:09:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:54.447 15:09:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:54.447 15:09:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:54.447 15:09:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:54.706 15:09:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.706 15:09:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.706 15:09:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:54.706 15:09:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:54.706 15:09:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:54.707 15:09:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:54.707 15:09:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.707 15:09:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.707 15:09:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.707 15:09:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.707 15:09:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:54.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:21:54.707 00:21:54.707 --- 10.0.0.2 ping statistics --- 00:21:54.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.707 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:54.707 15:09:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:54.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:21:54.707 00:21:54.707 --- 10.0.0.3 ping statistics --- 00:21:54.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.707 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:54.707 15:09:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:21:54.707 00:21:54.707 --- 10.0.0.1 ping statistics --- 00:21:54.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.707 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:54.707 15:09:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.707 15:09:10 -- nvmf/common.sh@422 -- # return 0 00:21:54.707 15:09:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:54.707 15:09:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.707 15:09:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:54.707 15:09:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:54.707 15:09:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.707 15:09:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:54.707 15:09:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:54.707 15:09:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:54.707 15:09:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:54.707 15:09:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:54.707 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:21:54.707 15:09:10 -- nvmf/common.sh@470 -- # nvmfpid=76596 00:21:54.707 15:09:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:54.707 15:09:10 -- nvmf/common.sh@471 -- # waitforlisten 76596 00:21:54.707 15:09:10 -- common/autotest_common.sh@817 -- # '[' -z 76596 ']' 00:21:54.707 15:09:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.707 15:09:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:54.707 15:09:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
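The no-huge variant repeats the same namespace and bridge bring-up; the real difference is visible in the nvmfappstart call above: nvmf_tgt is launched with --no-huge -s 1024, so DPDK backs its 1024 MB of memory with ordinary pages instead of hugepages (the EAL parameters later in the log correspondingly run with --iova-mode=va). The launch line, as run inside the target namespace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78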
00:21:54.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.707 15:09:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:54.707 15:09:10 -- common/autotest_common.sh@10 -- # set +x 00:21:54.707 [2024-04-18 15:09:10.349422] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:54.707 [2024-04-18 15:09:10.349534] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:54.966 [2024-04-18 15:09:10.498618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.966 [2024-04-18 15:09:10.643980] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.966 [2024-04-18 15:09:10.644043] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.966 [2024-04-18 15:09:10.644054] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.966 [2024-04-18 15:09:10.644063] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.966 [2024-04-18 15:09:10.644071] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.966 [2024-04-18 15:09:10.644264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:54.966 [2024-04-18 15:09:10.644964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:54.966 [2024-04-18 15:09:10.645067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:54.966 [2024-04-18 15:09:10.645067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.904 15:09:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:55.904 15:09:11 -- common/autotest_common.sh@850 -- # return 0 00:21:55.904 15:09:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:55.904 15:09:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 15:09:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.904 15:09:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.904 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 [2024-04-18 15:09:11.323935] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.904 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.904 15:09:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:55.904 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 Malloc0 00:21:55.904 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.904 15:09:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.904 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.904 15:09:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:55.904 15:09:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.904 15:09:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.904 15:09:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:55.904 15:09:11 -- common/autotest_common.sh@10 -- # set +x 00:21:55.904 [2024-04-18 15:09:11.376311] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.904 15:09:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:55.904 15:09:11 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:55.904 15:09:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:55.904 15:09:11 -- nvmf/common.sh@521 -- # config=() 00:21:55.904 15:09:11 -- nvmf/common.sh@521 -- # local subsystem config 00:21:55.904 15:09:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:55.904 15:09:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:55.904 { 00:21:55.904 "params": { 00:21:55.904 "name": "Nvme$subsystem", 00:21:55.904 "trtype": "$TEST_TRANSPORT", 00:21:55.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:55.904 "adrfam": "ipv4", 00:21:55.904 "trsvcid": "$NVMF_PORT", 00:21:55.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:55.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:55.904 "hdgst": ${hdgst:-false}, 00:21:55.904 "ddgst": ${ddgst:-false} 00:21:55.904 }, 00:21:55.904 "method": "bdev_nvme_attach_controller" 00:21:55.904 } 00:21:55.904 EOF 00:21:55.904 )") 00:21:55.904 15:09:11 -- nvmf/common.sh@543 -- # cat 00:21:55.904 15:09:11 -- nvmf/common.sh@545 -- # jq . 00:21:55.904 15:09:11 -- nvmf/common.sh@546 -- # IFS=, 00:21:55.904 15:09:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:55.904 "params": { 00:21:55.904 "name": "Nvme1", 00:21:55.904 "trtype": "tcp", 00:21:55.904 "traddr": "10.0.0.2", 00:21:55.904 "adrfam": "ipv4", 00:21:55.904 "trsvcid": "4420", 00:21:55.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:55.904 "hdgst": false, 00:21:55.904 "ddgst": false 00:21:55.904 }, 00:21:55.904 "method": "bdev_nvme_attach_controller" 00:21:55.904 }' 00:21:55.904 [2024-04-18 15:09:11.432900] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
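As in the first bdevio run, the subsystem is created over rpc_cmd, and bdevio itself is then started with the same --no-huge -s 1024 flags plus a JSON config generated on the fly by gen_nvmf_target_json and passed via --json /dev/fd/62. The trace only prints the bdev_nvme_attach_controller entry; wrapped in the usual SPDK "subsystems" envelope (an assumption here, since the wrapper itself is not echoed), an equivalent stand-alone invocation would be:

    # write out the generated config (outer wrapper assumed, params copied from the trace)
    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --no-huge -s 1024 --json /tmp/bdevio_nvme.json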
00:21:55.904 [2024-04-18 15:09:11.432967] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76650 ] 00:21:55.904 [2024-04-18 15:09:11.574476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:56.163 [2024-04-18 15:09:11.739031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.163 [2024-04-18 15:09:11.739147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.163 [2024-04-18 15:09:11.739148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.423 I/O targets: 00:21:56.423 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:56.423 00:21:56.423 00:21:56.423 CUnit - A unit testing framework for C - Version 2.1-3 00:21:56.423 http://cunit.sourceforge.net/ 00:21:56.423 00:21:56.423 00:21:56.423 Suite: bdevio tests on: Nvme1n1 00:21:56.423 Test: blockdev write read block ...passed 00:21:56.423 Test: blockdev write zeroes read block ...passed 00:21:56.423 Test: blockdev write zeroes read no split ...passed 00:21:56.423 Test: blockdev write zeroes read split ...passed 00:21:56.423 Test: blockdev write zeroes read split partial ...passed 00:21:56.423 Test: blockdev reset ...[2024-04-18 15:09:12.101384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.423 [2024-04-18 15:09:12.101511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdcc180 (9): Bad file descriptor 00:21:56.423 [2024-04-18 15:09:12.118142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:56.423 passed 00:21:56.423 Test: blockdev write read 8 blocks ...passed 00:21:56.423 Test: blockdev write read size > 128k ...passed 00:21:56.423 Test: blockdev write read invalid size ...passed 00:21:56.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:56.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:56.682 Test: blockdev write read max offset ...passed 00:21:56.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:56.682 Test: blockdev writev readv 8 blocks ...passed 00:21:56.682 Test: blockdev writev readv 30 x 1block ...passed 00:21:56.682 Test: blockdev writev readv block ...passed 00:21:56.682 Test: blockdev writev readv size > 128k ...passed 00:21:56.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:56.682 Test: blockdev comparev and writev ...[2024-04-18 15:09:12.293500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.293554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.293572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.293584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.294076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.294095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.294110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.294120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.294611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.294627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.294651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.294661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.295158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.295181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.295196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:56.682 [2024-04-18 15:09:12.295206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:56.682 passed 00:21:56.682 Test: blockdev nvme passthru rw ...passed 00:21:56.682 Test: blockdev nvme passthru vendor specific ...[2024-04-18 15:09:12.378910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.682 [2024-04-18 15:09:12.378968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.379081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.682 [2024-04-18 15:09:12.379094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.379181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.682 [2024-04-18 15:09:12.379193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:56.682 [2024-04-18 15:09:12.379283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:56.682 [2024-04-18 15:09:12.379295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:56.682 passed 00:21:56.942 Test: blockdev nvme admin passthru ...passed 00:21:56.942 Test: blockdev copy ...passed 00:21:56.942 00:21:56.942 Run Summary: Type Total Ran Passed Failed Inactive 00:21:56.942 suites 1 1 n/a 0 0 00:21:56.942 tests 23 23 23 0 0 00:21:56.942 asserts 152 152 152 0 
n/a 00:21:56.942 00:21:56.942 Elapsed time = 0.964 seconds 00:21:57.511 15:09:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.511 15:09:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.511 15:09:12 -- common/autotest_common.sh@10 -- # set +x 00:21:57.511 15:09:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.511 15:09:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:57.511 15:09:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:57.511 15:09:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:57.511 15:09:12 -- nvmf/common.sh@117 -- # sync 00:21:57.511 15:09:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.511 15:09:13 -- nvmf/common.sh@120 -- # set +e 00:21:57.511 15:09:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.511 15:09:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.511 rmmod nvme_tcp 00:21:57.511 rmmod nvme_fabrics 00:21:57.511 rmmod nvme_keyring 00:21:57.511 15:09:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.511 15:09:13 -- nvmf/common.sh@124 -- # set -e 00:21:57.511 15:09:13 -- nvmf/common.sh@125 -- # return 0 00:21:57.511 15:09:13 -- nvmf/common.sh@478 -- # '[' -n 76596 ']' 00:21:57.511 15:09:13 -- nvmf/common.sh@479 -- # killprocess 76596 00:21:57.511 15:09:13 -- common/autotest_common.sh@936 -- # '[' -z 76596 ']' 00:21:57.511 15:09:13 -- common/autotest_common.sh@940 -- # kill -0 76596 00:21:57.511 15:09:13 -- common/autotest_common.sh@941 -- # uname 00:21:57.511 15:09:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.511 15:09:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76596 00:21:57.511 15:09:13 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:57.511 15:09:13 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:57.511 killing process with pid 76596 00:21:57.511 15:09:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76596' 00:21:57.511 15:09:13 -- common/autotest_common.sh@955 -- # kill 76596 00:21:57.511 15:09:13 -- common/autotest_common.sh@960 -- # wait 76596 00:21:58.079 15:09:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:58.079 15:09:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:58.079 15:09:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:58.079 15:09:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.079 15:09:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.079 15:09:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.079 15:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.079 15:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.079 15:09:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:58.079 00:21:58.079 real 0m3.895s 00:21:58.079 user 0m13.587s 00:21:58.079 sys 0m1.715s 00:21:58.079 15:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:58.079 ************************************ 00:21:58.079 END TEST nvmf_bdevio_no_huge 00:21:58.079 ************************************ 00:21:58.079 15:09:13 -- common/autotest_common.sh@10 -- # set +x 00:21:58.079 15:09:13 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:58.079 15:09:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:58.079 15:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:58.079 15:09:13 -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.079 ************************************ 00:21:58.079 START TEST nvmf_tls 00:21:58.079 ************************************ 00:21:58.079 15:09:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:58.340 * Looking for test storage... 00:21:58.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:58.340 15:09:13 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.340 15:09:13 -- nvmf/common.sh@7 -- # uname -s 00:21:58.340 15:09:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.340 15:09:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.340 15:09:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.340 15:09:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.340 15:09:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.340 15:09:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.340 15:09:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.340 15:09:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.340 15:09:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.340 15:09:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:58.340 15:09:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:21:58.340 15:09:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.340 15:09:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.340 15:09:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.340 15:09:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.340 15:09:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.340 15:09:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.340 15:09:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.340 15:09:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.340 15:09:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.340 15:09:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.340 15:09:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.340 15:09:13 -- paths/export.sh@5 -- # export PATH 00:21:58.340 15:09:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.340 15:09:13 -- nvmf/common.sh@47 -- # : 0 00:21:58.340 15:09:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.340 15:09:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.340 15:09:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.340 15:09:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.340 15:09:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.340 15:09:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.340 15:09:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.340 15:09:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.340 15:09:13 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.340 15:09:13 -- target/tls.sh@62 -- # nvmftestinit 00:21:58.340 15:09:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:58.340 15:09:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.340 15:09:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:58.340 15:09:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:58.340 15:09:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:58.340 15:09:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.340 15:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.340 15:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.340 15:09:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:58.340 15:09:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:58.340 15:09:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.340 15:09:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.340 15:09:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:58.340 15:09:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:58.340 15:09:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:58.340 15:09:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:58.340 15:09:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:58.340 
15:09:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.340 15:09:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:58.341 15:09:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:58.341 15:09:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:58.341 15:09:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:58.341 15:09:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:58.341 15:09:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:58.341 Cannot find device "nvmf_tgt_br" 00:21:58.341 15:09:13 -- nvmf/common.sh@155 -- # true 00:21:58.341 15:09:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.341 Cannot find device "nvmf_tgt_br2" 00:21:58.341 15:09:13 -- nvmf/common.sh@156 -- # true 00:21:58.341 15:09:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:58.341 15:09:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:58.341 Cannot find device "nvmf_tgt_br" 00:21:58.341 15:09:14 -- nvmf/common.sh@158 -- # true 00:21:58.341 15:09:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:58.341 Cannot find device "nvmf_tgt_br2" 00:21:58.341 15:09:14 -- nvmf/common.sh@159 -- # true 00:21:58.341 15:09:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:58.601 15:09:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:58.601 15:09:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:58.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.601 15:09:14 -- nvmf/common.sh@162 -- # true 00:21:58.601 15:09:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:58.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.601 15:09:14 -- nvmf/common.sh@163 -- # true 00:21:58.601 15:09:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:58.601 15:09:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:58.601 15:09:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:58.601 15:09:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:58.601 15:09:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:58.601 15:09:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:58.601 15:09:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:58.601 15:09:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:58.601 15:09:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:58.601 15:09:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:58.601 15:09:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:58.601 15:09:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:58.601 15:09:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:58.601 15:09:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:58.601 15:09:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:58.601 15:09:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:58.601 15:09:14 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:58.601 15:09:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:58.601 15:09:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:58.601 15:09:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:58.601 15:09:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:58.860 15:09:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:58.860 15:09:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:58.860 15:09:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:58.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:58.860 00:21:58.860 --- 10.0.0.2 ping statistics --- 00:21:58.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.860 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:58.860 15:09:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:58.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:58.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:21:58.860 00:21:58.860 --- 10.0.0.3 ping statistics --- 00:21:58.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.860 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:58.860 15:09:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:58.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:58.860 00:21:58.860 --- 10.0.0.1 ping statistics --- 00:21:58.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.860 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:58.860 15:09:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.860 15:09:14 -- nvmf/common.sh@422 -- # return 0 00:21:58.860 15:09:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:58.860 15:09:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.860 15:09:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:58.860 15:09:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:58.860 15:09:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.860 15:09:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:58.860 15:09:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:58.860 15:09:14 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:58.860 15:09:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:58.860 15:09:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:58.860 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:21:58.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.860 15:09:14 -- nvmf/common.sh@470 -- # nvmfpid=76847 00:21:58.860 15:09:14 -- nvmf/common.sh@471 -- # waitforlisten 76847 00:21:58.860 15:09:14 -- common/autotest_common.sh@817 -- # '[' -z 76847 ']' 00:21:58.860 15:09:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.860 15:09:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:58.860 15:09:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
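For readability: the nvmf_veth_init sequence traced above reduces to the sketch below. Every command, interface name and address is taken from the trace; this is a condensed summary of what already ran, not additional test steps.

    # Veth-based test topology: the initiator side stays in the root namespace,
    # the target interfaces live in nvmf_tgt_ns_spdk, and a bridge (nvmf_br)
    # joins the peer ends of the veth pairs.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target,    10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target,    10.0.0.3

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # root namespace -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

The nvmf_tgt process is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc, as logged below), which is why the TLS listener binds 10.0.0.2 while bdevperf connects from the root namespace.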
00:21:58.860 15:09:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:58.860 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:21:58.860 15:09:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:58.860 [2024-04-18 15:09:14.423243] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:21:58.860 [2024-04-18 15:09:14.423326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.119 [2024-04-18 15:09:14.568634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.119 [2024-04-18 15:09:14.660969] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.119 [2024-04-18 15:09:14.661027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.119 [2024-04-18 15:09:14.661038] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.119 [2024-04-18 15:09:14.661048] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.119 [2024-04-18 15:09:14.661055] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.119 [2024-04-18 15:09:14.661097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.686 15:09:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:59.686 15:09:15 -- common/autotest_common.sh@850 -- # return 0 00:21:59.686 15:09:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:59.686 15:09:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:59.686 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 15:09:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.686 15:09:15 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:59.686 15:09:15 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:59.944 true 00:21:59.944 15:09:15 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:59.944 15:09:15 -- target/tls.sh@73 -- # jq -r .tls_version 00:22:00.202 15:09:15 -- target/tls.sh@73 -- # version=0 00:22:00.202 15:09:15 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:00.202 15:09:15 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:00.461 15:09:16 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.461 15:09:16 -- target/tls.sh@81 -- # jq -r .tls_version 00:22:00.720 15:09:16 -- target/tls.sh@81 -- # version=13 00:22:00.720 15:09:16 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:00.720 15:09:16 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:00.978 15:09:16 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:00.978 15:09:16 -- target/tls.sh@89 -- # jq -r .tls_version 00:22:00.978 15:09:16 -- target/tls.sh@89 -- # version=7 00:22:00.978 15:09:16 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:00.978 15:09:16 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:22:00.978 15:09:16 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:01.237 15:09:16 -- target/tls.sh@96 -- # ktls=false 00:22:01.237 15:09:16 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:01.237 15:09:16 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:01.496 15:09:17 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.496 15:09:17 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:01.755 15:09:17 -- target/tls.sh@104 -- # ktls=true 00:22:01.755 15:09:17 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:01.755 15:09:17 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:02.013 15:09:17 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.013 15:09:17 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:02.272 15:09:17 -- target/tls.sh@112 -- # ktls=false 00:22:02.272 15:09:17 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:02.272 15:09:17 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:02.272 15:09:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:02.272 15:09:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # digest=1 00:22:02.272 15:09:17 -- nvmf/common.sh@694 -- # python - 00:22:02.272 15:09:17 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.272 15:09:17 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:02.272 15:09:17 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:02.272 15:09:17 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:22:02.272 15:09:17 -- nvmf/common.sh@693 -- # digest=1 00:22:02.272 15:09:17 -- nvmf/common.sh@694 -- # python - 00:22:02.272 15:09:17 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.272 15:09:17 -- target/tls.sh@121 -- # mktemp 00:22:02.272 15:09:17 -- target/tls.sh@121 -- # key_path=/tmp/tmp.MiXdgh5kIJ 00:22:02.272 15:09:17 -- target/tls.sh@122 -- # mktemp 00:22:02.272 15:09:17 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dTvtOfWHm7 00:22:02.272 15:09:17 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.272 15:09:17 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.272 15:09:17 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.MiXdgh5kIJ 00:22:02.272 15:09:17 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dTvtOfWHm7 00:22:02.272 15:09:17 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:02.531 15:09:18 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:02.790 15:09:18 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.MiXdgh5kIJ 00:22:02.790 15:09:18 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.MiXdgh5kIJ 00:22:02.790 15:09:18 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.049 [2024-04-18 15:09:18.599631] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.049 15:09:18 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.308 15:09:18 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.566 [2024-04-18 15:09:19.031023] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.566 [2024-04-18 15:09:19.031251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.566 15:09:19 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.566 malloc0 00:22:03.824 15:09:19 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.824 15:09:19 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MiXdgh5kIJ 00:22:04.083 [2024-04-18 15:09:19.675272] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:04.083 15:09:19 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MiXdgh5kIJ 00:22:16.296 Initializing NVMe Controllers 00:22:16.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.296 Initialization complete. Launching workers. 
00:22:16.297 ======================================================== 00:22:16.297 Latency(us) 00:22:16.297 Device Information : IOPS MiB/s Average min max 00:22:16.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13747.75 53.70 4655.98 1047.56 7727.15 00:22:16.297 ======================================================== 00:22:16.297 Total : 13747.75 53.70 4655.98 1047.56 7727.15 00:22:16.297 00:22:16.297 15:09:29 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MiXdgh5kIJ 00:22:16.297 15:09:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.297 15:09:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.297 15:09:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.297 15:09:29 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MiXdgh5kIJ' 00:22:16.297 15:09:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.297 15:09:29 -- target/tls.sh@28 -- # bdevperf_pid=77198 00:22:16.297 15:09:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.297 15:09:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.297 15:09:29 -- target/tls.sh@31 -- # waitforlisten 77198 /var/tmp/bdevperf.sock 00:22:16.297 15:09:29 -- common/autotest_common.sh@817 -- # '[' -z 77198 ']' 00:22:16.297 15:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.297 15:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:16.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.297 15:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.297 15:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:16.297 15:09:29 -- common/autotest_common.sh@10 -- # set +x 00:22:16.297 [2024-04-18 15:09:29.929257] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
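The target-side setup that produced the listener used by the perf run above (and by every bdevperf attach attempt below) is spread across the xtrace output; condensed, with the rpc.py path, NQNs and key file exactly as traced, it is roughly:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # The target was started with --wait-for-rpc, so the ssl socket
    # implementation can be configured before the framework initializes.
    $rpc_py sock_set_default_impl -i ssl
    $rpc_py sock_impl_set_options -i ssl --tls-version 13
    $rpc_py framework_start_init

    # TCP transport plus a subsystem whose listener has TLS enabled (-k).
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

    # Backing namespace and the one host that is allowed in, bound to its PSK file.
    $rpc_py bdev_malloc_create 32 4096 -b malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
            --psk /tmp/tmp.MiXdgh5kIJ

The tcp.c:3652 deprecation warning logged earlier refers to this --psk file path, which the log itself notes is scheduled for removal in v24.09.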
00:22:16.297 [2024-04-18 15:09:29.929345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77198 ] 00:22:16.297 [2024-04-18 15:09:30.079570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.297 [2024-04-18 15:09:30.172001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.297 15:09:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:16.297 15:09:30 -- common/autotest_common.sh@850 -- # return 0 00:22:16.297 15:09:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MiXdgh5kIJ 00:22:16.297 [2024-04-18 15:09:31.034390] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.297 [2024-04-18 15:09:31.034507] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.297 TLSTESTn1 00:22:16.297 15:09:31 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:16.297 Running I/O for 10 seconds... 00:22:26.272 00:22:26.272 Latency(us) 00:22:26.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.272 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:26.272 Verification LBA range: start 0x0 length 0x2000 00:22:26.272 TLSTESTn1 : 10.01 5146.10 20.10 0.00 0.00 24834.13 4658.58 29899.16 00:22:26.272 =================================================================================================================== 00:22:26.272 Total : 5146.10 20.10 0.00 0.00 24834.13 4658.58 29899.16 00:22:26.272 0 00:22:26.272 15:09:41 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.272 15:09:41 -- target/tls.sh@45 -- # killprocess 77198 00:22:26.272 15:09:41 -- common/autotest_common.sh@936 -- # '[' -z 77198 ']' 00:22:26.272 15:09:41 -- common/autotest_common.sh@940 -- # kill -0 77198 00:22:26.272 15:09:41 -- common/autotest_common.sh@941 -- # uname 00:22:26.272 15:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.272 15:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77198 00:22:26.272 15:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:26.272 killing process with pid 77198 00:22:26.272 15:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:26.272 15:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77198' 00:22:26.272 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.272 00:22:26.272 Latency(us) 00:22:26.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.272 =================================================================================================================== 00:22:26.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.272 15:09:41 -- common/autotest_common.sh@955 -- # kill 77198 00:22:26.272 [2024-04-18 15:09:41.336469] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.272 
15:09:41 -- common/autotest_common.sh@960 -- # wait 77198 00:22:26.272 15:09:41 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dTvtOfWHm7 00:22:26.272 15:09:41 -- common/autotest_common.sh@638 -- # local es=0 00:22:26.272 15:09:41 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dTvtOfWHm7 00:22:26.272 15:09:41 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:26.272 15:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.272 15:09:41 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:26.272 15:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:26.272 15:09:41 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dTvtOfWHm7 00:22:26.272 15:09:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.272 15:09:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.272 15:09:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:26.272 15:09:41 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dTvtOfWHm7' 00:22:26.272 15:09:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.272 15:09:41 -- target/tls.sh@28 -- # bdevperf_pid=77345 00:22:26.272 15:09:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.272 15:09:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:26.272 15:09:41 -- target/tls.sh@31 -- # waitforlisten 77345 /var/tmp/bdevperf.sock 00:22:26.272 15:09:41 -- common/autotest_common.sh@817 -- # '[' -z 77345 ']' 00:22:26.272 15:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.272 15:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:26.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.272 15:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.272 15:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:26.272 15:09:41 -- common/autotest_common.sh@10 -- # set +x 00:22:26.272 [2024-04-18 15:09:41.636285] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:22:26.272 [2024-04-18 15:09:41.636911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77345 ] 00:22:26.272 [2024-04-18 15:09:41.780384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.272 [2024-04-18 15:09:41.869622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.838 15:09:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.838 15:09:42 -- common/autotest_common.sh@850 -- # return 0 00:22:26.838 15:09:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dTvtOfWHm7 00:22:27.096 [2024-04-18 15:09:42.683134] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.096 [2024-04-18 15:09:42.683248] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.096 [2024-04-18 15:09:42.695109] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.096 [2024-04-18 15:09:42.695591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe9c0 (107): Transport endpoint is not connected 00:22:27.096 [2024-04-18 15:09:42.696577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe9c0 (9): Bad file descriptor 00:22:27.096 [2024-04-18 15:09:42.697573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.096 [2024-04-18 15:09:42.697597] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.096 [2024-04-18 15:09:42.697611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
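The attach that just failed (deliberately wrong key, /tmp/tmp.dTvtOfWHm7) and the TLSTESTn1 run that passed above drive the initiator in the same way; only the PSK differs. Condensed from the trace, with the RPC socket path and NQNs as logged, the passing sequence is:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # bdevperf runs as a long-lived app (-z) controlled over its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach over TCP with TLS by handing the controller the PSK file; a bdev
    # named TLSTESTn1 shows up once the handshake and connect succeed.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk /tmp/tmp.MiXdgh5kIJ

    # Drive I/O against the attached bdev.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
            -t 20 -s /var/tmp/bdevperf.sock perform_tests

With the mismatched key the target drops the connection during the handshake, which is what the "Bad file descriptor" / "Transport endpoint is not connected" messages above and the JSON-RPC failure below are reporting.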
00:22:27.096 2024/04/18 15:09:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.dTvtOfWHm7 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:27.096 request: 00:22:27.096 { 00:22:27.096 "method": "bdev_nvme_attach_controller", 00:22:27.096 "params": { 00:22:27.096 "name": "TLSTEST", 00:22:27.096 "trtype": "tcp", 00:22:27.096 "traddr": "10.0.0.2", 00:22:27.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:27.096 "adrfam": "ipv4", 00:22:27.096 "trsvcid": "4420", 00:22:27.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.096 "psk": "/tmp/tmp.dTvtOfWHm7" 00:22:27.096 } 00:22:27.096 } 00:22:27.096 Got JSON-RPC error response 00:22:27.096 GoRPCClient: error on JSON-RPC call 00:22:27.096 15:09:42 -- target/tls.sh@36 -- # killprocess 77345 00:22:27.096 15:09:42 -- common/autotest_common.sh@936 -- # '[' -z 77345 ']' 00:22:27.096 15:09:42 -- common/autotest_common.sh@940 -- # kill -0 77345 00:22:27.096 15:09:42 -- common/autotest_common.sh@941 -- # uname 00:22:27.096 15:09:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:27.096 15:09:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77345 00:22:27.096 15:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:27.096 killing process with pid 77345 00:22:27.096 15:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:27.096 15:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77345' 00:22:27.096 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.096 00:22:27.096 Latency(us) 00:22:27.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.096 =================================================================================================================== 00:22:27.096 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.096 15:09:42 -- common/autotest_common.sh@955 -- # kill 77345 00:22:27.096 [2024-04-18 15:09:42.764535] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.096 15:09:42 -- common/autotest_common.sh@960 -- # wait 77345 00:22:27.355 15:09:42 -- target/tls.sh@37 -- # return 1 00:22:27.355 15:09:42 -- common/autotest_common.sh@641 -- # es=1 00:22:27.355 15:09:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:27.355 15:09:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:27.355 15:09:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:27.355 15:09:42 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MiXdgh5kIJ 00:22:27.355 15:09:42 -- common/autotest_common.sh@638 -- # local es=0 00:22:27.355 15:09:42 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MiXdgh5kIJ 00:22:27.355 15:09:42 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:27.355 15:09:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:27.355 15:09:42 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:27.355 15:09:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:27.355 15:09:42 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MiXdgh5kIJ 00:22:27.355 15:09:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.355 15:09:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.355 15:09:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:27.355 15:09:42 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MiXdgh5kIJ' 00:22:27.355 15:09:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.355 15:09:42 -- target/tls.sh@28 -- # bdevperf_pid=77385 00:22:27.355 15:09:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.355 15:09:42 -- target/tls.sh@31 -- # waitforlisten 77385 /var/tmp/bdevperf.sock 00:22:27.355 15:09:42 -- common/autotest_common.sh@817 -- # '[' -z 77385 ']' 00:22:27.355 15:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.355 15:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.355 15:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.355 15:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.355 15:09:42 -- common/autotest_common.sh@10 -- # set +x 00:22:27.355 15:09:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.355 [2024-04-18 15:09:43.044211] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:27.355 [2024-04-18 15:09:43.044300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77385 ] 00:22:27.613 [2024-04-18 15:09:43.187216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.613 [2024-04-18 15:09:43.281470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.547 15:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.547 15:09:43 -- common/autotest_common.sh@850 -- # return 0 00:22:28.547 15:09:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.MiXdgh5kIJ 00:22:28.547 [2024-04-18 15:09:44.085042] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.547 [2024-04-18 15:09:44.085177] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:28.547 [2024-04-18 15:09:44.092846] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-04-18 15:09:44.092888] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:28.547 [2024-04-18 15:09:44.092940] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:28.547 [2024-04-18 15:09:44.093643] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9309c0 (107): Transport endpoint is not connected 00:22:28.547 [2024-04-18 15:09:44.094628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9309c0 (9): Bad file descriptor 00:22:28.547 [2024-04-18 15:09:44.095624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.547 [2024-04-18 15:09:44.095651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:28.547 [2024-04-18 15:09:44.095665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.547 2024/04/18 15:09:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.MiXdgh5kIJ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:28.547 request: 00:22:28.547 { 00:22:28.547 "method": "bdev_nvme_attach_controller", 00:22:28.547 "params": { 00:22:28.547 "name": "TLSTEST", 00:22:28.547 "trtype": "tcp", 00:22:28.547 "traddr": "10.0.0.2", 00:22:28.547 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.547 "adrfam": "ipv4", 00:22:28.547 "trsvcid": "4420", 00:22:28.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.547 "psk": "/tmp/tmp.MiXdgh5kIJ" 00:22:28.547 } 00:22:28.547 } 00:22:28.547 Got JSON-RPC error response 00:22:28.547 GoRPCClient: error on JSON-RPC call 00:22:28.547 15:09:44 -- target/tls.sh@36 -- # killprocess 77385 00:22:28.547 15:09:44 -- common/autotest_common.sh@936 -- # '[' -z 77385 ']' 00:22:28.547 15:09:44 -- common/autotest_common.sh@940 -- # kill -0 77385 00:22:28.547 15:09:44 -- common/autotest_common.sh@941 -- # uname 00:22:28.547 15:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:28.547 15:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77385 00:22:28.547 15:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:28.547 15:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:28.547 15:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77385' 00:22:28.547 killing process with pid 77385 00:22:28.547 15:09:44 -- common/autotest_common.sh@955 -- # kill 77385 00:22:28.547 Received shutdown signal, test time was about 10.000000 seconds 00:22:28.547 00:22:28.547 Latency(us) 00:22:28.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.547 =================================================================================================================== 00:22:28.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:28.547 [2024-04-18 15:09:44.158121] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:28.547 15:09:44 -- common/autotest_common.sh@960 -- # wait 77385 00:22:28.806 15:09:44 -- target/tls.sh@37 -- # return 1 00:22:28.806 15:09:44 -- common/autotest_common.sh@641 -- # es=1 00:22:28.806 15:09:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:28.806 15:09:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:28.806 15:09:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:28.806 15:09:44 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.MiXdgh5kIJ 00:22:28.806 15:09:44 -- common/autotest_common.sh@638 -- # local es=0 00:22:28.806 15:09:44 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MiXdgh5kIJ 00:22:28.806 15:09:44 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:28.806 15:09:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:28.806 15:09:44 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:28.806 15:09:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:28.806 15:09:44 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MiXdgh5kIJ 00:22:28.806 15:09:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.806 15:09:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:28.806 15:09:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.806 15:09:44 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MiXdgh5kIJ' 00:22:28.806 15:09:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.806 15:09:44 -- target/tls.sh@28 -- # bdevperf_pid=77435 00:22:28.806 15:09:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.806 15:09:44 -- target/tls.sh@31 -- # waitforlisten 77435 /var/tmp/bdevperf.sock 00:22:28.806 15:09:44 -- common/autotest_common.sh@817 -- # '[' -z 77435 ']' 00:22:28.806 15:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.806 15:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.806 15:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.806 15:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.806 15:09:44 -- common/autotest_common.sh@10 -- # set +x 00:22:28.806 15:09:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.806 [2024-04-18 15:09:44.446253] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:22:28.806 [2024-04-18 15:09:44.446337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77435 ] 00:22:29.064 [2024-04-18 15:09:44.589883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.064 [2024-04-18 15:09:44.677903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.631 15:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:29.631 15:09:45 -- common/autotest_common.sh@850 -- # return 0 00:22:29.631 15:09:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MiXdgh5kIJ 00:22:29.890 [2024-04-18 15:09:45.469183] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.890 [2024-04-18 15:09:45.469321] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.890 [2024-04-18 15:09:45.480765] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.890 [2024-04-18 15:09:45.480820] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.890 [2024-04-18 15:09:45.480873] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:29.890 [2024-04-18 15:09:45.481111] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13689c0 (107): Transport endpoint is not connected 00:22:29.890 [2024-04-18 15:09:45.482088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13689c0 (9): Bad file descriptor 00:22:29.890 [2024-04-18 15:09:45.483083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:29.890 [2024-04-18 15:09:45.483109] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:29.890 [2024-04-18 15:09:45.483129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:29.890 2024/04/18 15:09:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.MiXdgh5kIJ subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:29.890 request: 00:22:29.890 { 00:22:29.890 "method": "bdev_nvme_attach_controller", 00:22:29.890 "params": { 00:22:29.890 "name": "TLSTEST", 00:22:29.890 "trtype": "tcp", 00:22:29.890 "traddr": "10.0.0.2", 00:22:29.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.890 "adrfam": "ipv4", 00:22:29.890 "trsvcid": "4420", 00:22:29.890 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.890 "psk": "/tmp/tmp.MiXdgh5kIJ" 00:22:29.890 } 00:22:29.890 } 00:22:29.890 Got JSON-RPC error response 00:22:29.890 GoRPCClient: error on JSON-RPC call 00:22:29.890 15:09:45 -- target/tls.sh@36 -- # killprocess 77435 00:22:29.890 15:09:45 -- common/autotest_common.sh@936 -- # '[' -z 77435 ']' 00:22:29.890 15:09:45 -- common/autotest_common.sh@940 -- # kill -0 77435 00:22:29.890 15:09:45 -- common/autotest_common.sh@941 -- # uname 00:22:29.890 15:09:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:29.890 15:09:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77435 00:22:29.890 15:09:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:29.890 15:09:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:29.890 killing process with pid 77435 00:22:29.890 15:09:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77435' 00:22:29.890 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.890 00:22:29.890 Latency(us) 00:22:29.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.890 =================================================================================================================== 00:22:29.890 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.890 15:09:45 -- common/autotest_common.sh@955 -- # kill 77435 00:22:29.890 [2024-04-18 15:09:45.549548] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:29.890 15:09:45 -- common/autotest_common.sh@960 -- # wait 77435 00:22:30.149 15:09:45 -- target/tls.sh@37 -- # return 1 00:22:30.149 15:09:45 -- common/autotest_common.sh@641 -- # es=1 00:22:30.149 15:09:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:30.149 15:09:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:30.149 15:09:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:30.149 15:09:45 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:30.149 15:09:45 -- common/autotest_common.sh@638 -- # local es=0 00:22:30.149 15:09:45 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:30.149 15:09:45 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:30.149 15:09:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:30.149 15:09:45 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:30.149 15:09:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:30.149 15:09:45 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:22:30.149 15:09:45 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.149 15:09:45 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.149 15:09:45 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.149 15:09:45 -- target/tls.sh@23 -- # psk= 00:22:30.149 15:09:45 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.149 15:09:45 -- target/tls.sh@28 -- # bdevperf_pid=77477 00:22:30.149 15:09:45 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.149 15:09:45 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.149 15:09:45 -- target/tls.sh@31 -- # waitforlisten 77477 /var/tmp/bdevperf.sock 00:22:30.149 15:09:45 -- common/autotest_common.sh@817 -- # '[' -z 77477 ']' 00:22:30.149 15:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.149 15:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:30.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.149 15:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.149 15:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:30.149 15:09:45 -- common/autotest_common.sh@10 -- # set +x 00:22:30.149 [2024-04-18 15:09:45.832978] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:30.149 [2024-04-18 15:09:45.833083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ] 00:22:30.407 [2024-04-18 15:09:45.976502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.407 [2024-04-18 15:09:46.074298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.342 15:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:31.342 15:09:46 -- common/autotest_common.sh@850 -- # return 0 00:22:31.342 15:09:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:31.342 [2024-04-18 15:09:46.898687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:31.342 [2024-04-18 15:09:46.900561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1fdc0 (9): Bad file descriptor 00:22:31.342 [2024-04-18 15:09:46.901566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.342 [2024-04-18 15:09:46.901602] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:31.342 [2024-04-18 15:09:46.901625] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:31.342 2024/04/18 15:09:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:31.342 request: 00:22:31.342 { 00:22:31.342 "method": "bdev_nvme_attach_controller", 00:22:31.342 "params": { 00:22:31.342 "name": "TLSTEST", 00:22:31.342 "trtype": "tcp", 00:22:31.342 "traddr": "10.0.0.2", 00:22:31.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.342 "adrfam": "ipv4", 00:22:31.342 "trsvcid": "4420", 00:22:31.342 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:22:31.342 } 00:22:31.342 } 00:22:31.342 Got JSON-RPC error response 00:22:31.342 GoRPCClient: error on JSON-RPC call 00:22:31.342 15:09:46 -- target/tls.sh@36 -- # killprocess 77477 00:22:31.342 15:09:46 -- common/autotest_common.sh@936 -- # '[' -z 77477 ']' 00:22:31.342 15:09:46 -- common/autotest_common.sh@940 -- # kill -0 77477 00:22:31.342 15:09:46 -- common/autotest_common.sh@941 -- # uname 00:22:31.342 15:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.342 15:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77477 00:22:31.342 15:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:31.342 15:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:31.342 15:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77477' 00:22:31.342 killing process with pid 77477 00:22:31.342 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.342 00:22:31.342 Latency(us) 00:22:31.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.342 =================================================================================================================== 00:22:31.342 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.342 15:09:46 -- common/autotest_common.sh@955 -- # kill 77477 00:22:31.342 15:09:46 -- common/autotest_common.sh@960 -- # wait 77477 00:22:31.601 15:09:47 -- target/tls.sh@37 -- # return 1 00:22:31.601 15:09:47 -- common/autotest_common.sh@641 -- # es=1 00:22:31.601 15:09:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:31.601 15:09:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:31.601 15:09:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:31.601 15:09:47 -- target/tls.sh@158 -- # killprocess 76847 00:22:31.601 15:09:47 -- common/autotest_common.sh@936 -- # '[' -z 76847 ']' 00:22:31.601 15:09:47 -- common/autotest_common.sh@940 -- # kill -0 76847 00:22:31.601 15:09:47 -- common/autotest_common.sh@941 -- # uname 00:22:31.601 15:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:31.601 15:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76847 00:22:31.601 15:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:31.601 killing process with pid 76847 00:22:31.601 15:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:31.601 15:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76847' 00:22:31.601 15:09:47 -- common/autotest_common.sh@955 -- # kill 76847 00:22:31.601 [2024-04-18 15:09:47.251499] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:31.601 15:09:47 -- 
common/autotest_common.sh@960 -- # wait 76847 00:22:31.859 15:09:47 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.860 15:09:47 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.860 15:09:47 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:31.860 15:09:47 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:22:31.860 15:09:47 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:31.860 15:09:47 -- nvmf/common.sh@693 -- # digest=2 00:22:31.860 15:09:47 -- nvmf/common.sh@694 -- # python - 00:22:31.860 15:09:47 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.860 15:09:47 -- target/tls.sh@160 -- # mktemp 00:22:31.860 15:09:47 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.qFD9mpxRsg 00:22:31.860 15:09:47 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.860 15:09:47 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.qFD9mpxRsg 00:22:31.860 15:09:47 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:31.860 15:09:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:31.860 15:09:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:31.860 15:09:47 -- common/autotest_common.sh@10 -- # set +x 00:22:31.860 15:09:47 -- nvmf/common.sh@470 -- # nvmfpid=77533 00:22:31.860 15:09:47 -- nvmf/common.sh@471 -- # waitforlisten 77533 00:22:31.860 15:09:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.860 15:09:47 -- common/autotest_common.sh@817 -- # '[' -z 77533 ']' 00:22:31.860 15:09:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.860 15:09:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.860 15:09:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.860 15:09:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.860 15:09:47 -- common/autotest_common.sh@10 -- # set +x 00:22:32.118 [2024-04-18 15:09:47.604809] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:32.119 [2024-04-18 15:09:47.604938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.119 [2024-04-18 15:09:47.748036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.377 [2024-04-18 15:09:47.848241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.377 [2024-04-18 15:09:47.848299] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.377 [2024-04-18 15:09:47.848309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.377 [2024-04-18 15:09:47.848319] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.377 [2024-04-18 15:09:47.848326] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
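The key_long value generated above is the TLS PSK in the NVMe/TCP interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (02 here, selected by the trailing 2 passed to format_interchange_psk), and a base64 blob, all colon-separated and written to a temp file that is immediately chmod'ed to 0600. The sketch below mirrors the python - call traced from nvmf/common.sh; the assumption that the base64 payload is the raw key bytes followed by their CRC-32 in little-endian order is an illustration, not a verbatim restatement of that helper.

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
# Assumed layout: key bytes plus a little-endian CRC-32 trailer, then base64.
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# If the assumed layout is right, this prints the key_long string shown above.
# The 0600 mode on /tmp/tmp.qFD9mpxRsg matters for the permission checks later in the run.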
00:22:32.377 [2024-04-18 15:09:47.848366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.987 15:09:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:32.987 15:09:48 -- common/autotest_common.sh@850 -- # return 0 00:22:32.987 15:09:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:32.987 15:09:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:32.987 15:09:48 -- common/autotest_common.sh@10 -- # set +x 00:22:32.987 15:09:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.987 15:09:48 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:22:32.987 15:09:48 -- target/tls.sh@49 -- # local key=/tmp/tmp.qFD9mpxRsg 00:22:32.987 15:09:48 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:33.248 [2024-04-18 15:09:48.755238] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.248 15:09:48 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:33.506 15:09:48 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:33.506 [2024-04-18 15:09:49.182605] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.506 [2024-04-18 15:09:49.182845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.506 15:09:49 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:33.765 malloc0 00:22:33.765 15:09:49 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:34.023 15:09:49 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:34.281 [2024-04-18 15:09:49.802668] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:34.282 15:09:49 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFD9mpxRsg 00:22:34.282 15:09:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.282 15:09:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.282 15:09:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:34.282 15:09:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qFD9mpxRsg' 00:22:34.282 15:09:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.282 15:09:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.282 15:09:49 -- target/tls.sh@28 -- # bdevperf_pid=77630 00:22:34.282 15:09:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.282 15:09:49 -- target/tls.sh@31 -- # waitforlisten 77630 /var/tmp/bdevperf.sock 00:22:34.282 15:09:49 -- common/autotest_common.sh@817 -- # '[' -z 77630 ']' 00:22:34.282 15:09:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.282 15:09:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:34.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
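Stripped of the xtrace noise, the target-side setup performed by setup_nvmf_tgt above (target/tls.sh@49-58) is the RPC sequence below; every command is taken from this run. The -k flag on the listener is what enables the (still experimental) TLS support, and the per-host PSK is registered with nvmf_subsystem_add_host.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.qFD9mpxRsg   # interchange-format PSK, mode 0600
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
# The bdevperf initiator then attaches with the matching key (target/tls.sh@34):
# rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
#   -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"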
00:22:34.282 15:09:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.282 15:09:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:34.282 15:09:49 -- common/autotest_common.sh@10 -- # set +x 00:22:34.282 [2024-04-18 15:09:49.865644] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:34.282 [2024-04-18 15:09:49.865731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77630 ] 00:22:34.541 [2024-04-18 15:09:50.006158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.541 [2024-04-18 15:09:50.103640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.109 15:09:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:35.109 15:09:50 -- common/autotest_common.sh@850 -- # return 0 00:22:35.109 15:09:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:35.367 [2024-04-18 15:09:50.933692] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.367 [2024-04-18 15:09:50.933810] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:35.367 TLSTESTn1 00:22:35.367 15:09:51 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:35.626 Running I/O for 10 seconds... 
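bdevperf.py perform_tests then drives the verify workload (queue depth 128, 4096-byte I/O, as set by -q 128 -o 4096) against TLSTESTn1 for the 10-second run configured with -t 10. The table that follows reports about 5321.58 IOPS, and the MiB/s column is simply IOPS times the I/O size:

python3 -c 'print(round(5321.58 * 4096 / 2**20, 2))'   # 20.79, matching the MiB/s value below

So roughly 20.79 MiB/s of 4 KiB verify traffic flowed over the TLS-wrapped connection during the run.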
00:22:45.762 00:22:45.762 Latency(us) 00:22:45.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.762 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:45.762 Verification LBA range: start 0x0 length 0x2000 00:22:45.762 TLSTESTn1 : 10.01 5321.58 20.79 0.00 0.00 24015.60 5079.70 19581.84 00:22:45.762 =================================================================================================================== 00:22:45.762 Total : 5321.58 20.79 0.00 0.00 24015.60 5079.70 19581.84 00:22:45.762 0 00:22:45.762 15:10:01 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.762 15:10:01 -- target/tls.sh@45 -- # killprocess 77630 00:22:45.762 15:10:01 -- common/autotest_common.sh@936 -- # '[' -z 77630 ']' 00:22:45.762 15:10:01 -- common/autotest_common.sh@940 -- # kill -0 77630 00:22:45.762 15:10:01 -- common/autotest_common.sh@941 -- # uname 00:22:45.762 15:10:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.762 15:10:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77630 00:22:45.762 15:10:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:45.762 15:10:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:45.762 killing process with pid 77630 00:22:45.762 15:10:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77630' 00:22:45.762 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.762 00:22:45.762 Latency(us) 00:22:45.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.762 =================================================================================================================== 00:22:45.762 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.762 15:10:01 -- common/autotest_common.sh@955 -- # kill 77630 00:22:45.762 [2024-04-18 15:10:01.183409] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:45.762 15:10:01 -- common/autotest_common.sh@960 -- # wait 77630 00:22:45.762 15:10:01 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.qFD9mpxRsg 00:22:45.762 15:10:01 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFD9mpxRsg 00:22:45.762 15:10:01 -- common/autotest_common.sh@638 -- # local es=0 00:22:45.762 15:10:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFD9mpxRsg 00:22:45.762 15:10:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:45.763 15:10:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.763 15:10:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:45.763 15:10:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:45.763 15:10:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFD9mpxRsg 00:22:45.763 15:10:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:45.763 15:10:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:45.763 15:10:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:45.763 15:10:01 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qFD9mpxRsg' 00:22:45.763 15:10:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.763 15:10:01 -- target/tls.sh@28 -- # bdevperf_pid=77782 00:22:45.763 
15:10:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.763 15:10:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.763 15:10:01 -- target/tls.sh@31 -- # waitforlisten 77782 /var/tmp/bdevperf.sock 00:22:45.763 15:10:01 -- common/autotest_common.sh@817 -- # '[' -z 77782 ']' 00:22:45.763 15:10:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.763 15:10:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:45.763 15:10:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.763 15:10:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.763 15:10:01 -- common/autotest_common.sh@10 -- # set +x 00:22:46.022 [2024-04-18 15:10:01.487706] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:46.022 [2024-04-18 15:10:01.488050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77782 ] 00:22:46.022 [2024-04-18 15:10:01.632774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.280 [2024-04-18 15:10:01.727809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.849 15:10:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.850 15:10:02 -- common/autotest_common.sh@850 -- # return 0 00:22:46.850 15:10:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:46.850 [2024-04-18 15:10:02.553657] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.850 [2024-04-18 15:10:02.553734] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.850 [2024-04-18 15:10:02.553743] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.qFD9mpxRsg 00:22:47.117 2024/04/18 15:10:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.qFD9mpxRsg subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:22:47.117 request: 00:22:47.117 { 00:22:47.117 "method": "bdev_nvme_attach_controller", 00:22:47.117 "params": { 00:22:47.117 "name": "TLSTEST", 00:22:47.117 "trtype": "tcp", 00:22:47.117 "traddr": "10.0.0.2", 00:22:47.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.117 "adrfam": "ipv4", 00:22:47.117 "trsvcid": "4420", 00:22:47.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.117 "psk": "/tmp/tmp.qFD9mpxRsg" 00:22:47.117 } 00:22:47.117 } 00:22:47.117 Got JSON-RPC error response 00:22:47.117 GoRPCClient: error on JSON-RPC call 00:22:47.117 15:10:02 -- target/tls.sh@36 -- # killprocess 77782 00:22:47.117 15:10:02 -- common/autotest_common.sh@936 -- # '[' -z 77782 ']' 00:22:47.117 15:10:02 -- common/autotest_common.sh@940 -- # kill -0 77782 
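The failure above is the point of this case: target/tls.sh@170 loosened the key file to 0666, and the initiator refuses to load a PSK that is accessible to group or other (bdev_nvme.c reports "Incorrect permissions for PSK file" and the attach surfaces as Code=-1 Operation not permitted). The shape of that gate, as a shell sketch rather than the actual C check in bdev_nvme.c:

# Illustrative only: reject a key file that is readable or writable by group/other.
key=/tmp/tmp.qFD9mpxRsg
mode=$(stat -c %a "$key")
if (( 8#$mode & 8#077 )); then
    echo "refusing PSK file $key with group/other access (mode $mode)" >&2
    exit 1
fi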
00:22:47.117 15:10:02 -- common/autotest_common.sh@941 -- # uname 00:22:47.117 15:10:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.117 15:10:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77782 00:22:47.117 killing process with pid 77782 00:22:47.117 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.117 00:22:47.117 Latency(us) 00:22:47.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.117 =================================================================================================================== 00:22:47.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:47.117 15:10:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:47.117 15:10:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:47.117 15:10:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77782' 00:22:47.117 15:10:02 -- common/autotest_common.sh@955 -- # kill 77782 00:22:47.117 15:10:02 -- common/autotest_common.sh@960 -- # wait 77782 00:22:47.377 15:10:02 -- target/tls.sh@37 -- # return 1 00:22:47.377 15:10:02 -- common/autotest_common.sh@641 -- # es=1 00:22:47.377 15:10:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:47.377 15:10:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:47.377 15:10:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:47.377 15:10:02 -- target/tls.sh@174 -- # killprocess 77533 00:22:47.377 15:10:02 -- common/autotest_common.sh@936 -- # '[' -z 77533 ']' 00:22:47.377 15:10:02 -- common/autotest_common.sh@940 -- # kill -0 77533 00:22:47.377 15:10:02 -- common/autotest_common.sh@941 -- # uname 00:22:47.377 15:10:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:47.377 15:10:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77533 00:22:47.377 killing process with pid 77533 00:22:47.377 15:10:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:47.377 15:10:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:47.377 15:10:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77533' 00:22:47.377 15:10:02 -- common/autotest_common.sh@955 -- # kill 77533 00:22:47.377 [2024-04-18 15:10:02.873122] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:47.377 15:10:02 -- common/autotest_common.sh@960 -- # wait 77533 00:22:47.635 15:10:03 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:47.635 15:10:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:47.635 15:10:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:47.635 15:10:03 -- common/autotest_common.sh@10 -- # set +x 00:22:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.635 15:10:03 -- nvmf/common.sh@470 -- # nvmfpid=77833 00:22:47.635 15:10:03 -- nvmf/common.sh@471 -- # waitforlisten 77833 00:22:47.635 15:10:03 -- common/autotest_common.sh@817 -- # '[' -z 77833 ']' 00:22:47.635 15:10:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.635 15:10:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:47.635 15:10:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
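The same permission rule is enforced on the target side. A fresh nvmf_tgt (pid 77833) is being brought up here, and target/tls.sh@177 wraps setup_nvmf_tgt in NOT because nvmf_subsystem_add_host is expected to reject the key while it is still 0666 (tcp.c logs the same "Incorrect permissions for PSK file" further down, surfacing as a -32603 Internal error). The run only returns to the happy path once the key is tightened again; all three steps appear verbatim later in the log:

NOT setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg   # target/tls.sh@177, add_host rejects the 0666 key
chmod 0600 /tmp/tmp.qFD9mpxRsg           # target/tls.sh@181
setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg       # target/tls.sh@185, succeeds again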
00:22:47.635 15:10:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:47.635 15:10:03 -- common/autotest_common.sh@10 -- # set +x 00:22:47.635 15:10:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.635 [2024-04-18 15:10:03.164279] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:47.635 [2024-04-18 15:10:03.164370] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.635 [2024-04-18 15:10:03.302282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.894 [2024-04-18 15:10:03.392928] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.894 [2024-04-18 15:10:03.392991] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.894 [2024-04-18 15:10:03.393002] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.894 [2024-04-18 15:10:03.393012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.894 [2024-04-18 15:10:03.393019] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.894 [2024-04-18 15:10:03.393066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.463 15:10:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:48.463 15:10:04 -- common/autotest_common.sh@850 -- # return 0 00:22:48.463 15:10:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:48.463 15:10:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:48.463 15:10:04 -- common/autotest_common.sh@10 -- # set +x 00:22:48.463 15:10:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.463 15:10:04 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:22:48.463 15:10:04 -- common/autotest_common.sh@638 -- # local es=0 00:22:48.463 15:10:04 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:22:48.463 15:10:04 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:22:48.463 15:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.463 15:10:04 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:22:48.463 15:10:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.463 15:10:04 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:22:48.463 15:10:04 -- target/tls.sh@49 -- # local key=/tmp/tmp.qFD9mpxRsg 00:22:48.463 15:10:04 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.723 [2024-04-18 15:10:04.260821] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.723 15:10:04 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.983 15:10:04 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.983 [2024-04-18 15:10:04.688177] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.983 [2024-04-18 
15:10:04.688411] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.242 15:10:04 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.242 malloc0 00:22:49.242 15:10:04 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.501 15:10:05 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:49.761 [2024-04-18 15:10:05.316218] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:49.761 [2024-04-18 15:10:05.316270] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:49.761 [2024-04-18 15:10:05.316295] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:49.761 2024/04/18 15:10:05 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.qFD9mpxRsg], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:49.761 request: 00:22:49.761 { 00:22:49.761 "method": "nvmf_subsystem_add_host", 00:22:49.761 "params": { 00:22:49.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.761 "host": "nqn.2016-06.io.spdk:host1", 00:22:49.761 "psk": "/tmp/tmp.qFD9mpxRsg" 00:22:49.761 } 00:22:49.761 } 00:22:49.761 Got JSON-RPC error response 00:22:49.761 GoRPCClient: error on JSON-RPC call 00:22:49.761 15:10:05 -- common/autotest_common.sh@641 -- # es=1 00:22:49.761 15:10:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:49.761 15:10:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:49.761 15:10:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:49.761 15:10:05 -- target/tls.sh@180 -- # killprocess 77833 00:22:49.761 15:10:05 -- common/autotest_common.sh@936 -- # '[' -z 77833 ']' 00:22:49.761 15:10:05 -- common/autotest_common.sh@940 -- # kill -0 77833 00:22:49.761 15:10:05 -- common/autotest_common.sh@941 -- # uname 00:22:49.761 15:10:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.761 15:10:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77833 00:22:49.761 15:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:49.761 killing process with pid 77833 00:22:49.761 15:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:49.761 15:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77833' 00:22:49.761 15:10:05 -- common/autotest_common.sh@955 -- # kill 77833 00:22:49.761 15:10:05 -- common/autotest_common.sh@960 -- # wait 77833 00:22:50.020 15:10:05 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.qFD9mpxRsg 00:22:50.021 15:10:05 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:50.021 15:10:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:50.021 15:10:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:50.021 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.021 15:10:05 -- nvmf/common.sh@470 -- # nvmfpid=77940 00:22:50.021 15:10:05 -- nvmf/common.sh@471 -- # waitforlisten 77940 00:22:50.021 15:10:05 -- common/autotest_common.sh@817 -- # '[' -z 77940 ']' 00:22:50.021 15:10:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.021 15:10:05 -- common/autotest_common.sh@822 -- 
# local max_retries=100 00:22:50.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.021 15:10:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.021 15:10:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:50.021 15:10:05 -- common/autotest_common.sh@10 -- # set +x 00:22:50.021 15:10:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:50.021 [2024-04-18 15:10:05.680904] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:50.021 [2024-04-18 15:10:05.680978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.281 [2024-04-18 15:10:05.823867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.281 [2024-04-18 15:10:05.902285] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.281 [2024-04-18 15:10:05.902349] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.281 [2024-04-18 15:10:05.902360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.281 [2024-04-18 15:10:05.902368] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.281 [2024-04-18 15:10:05.902376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.281 [2024-04-18 15:10:05.902414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.849 15:10:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:50.849 15:10:06 -- common/autotest_common.sh@850 -- # return 0 00:22:50.849 15:10:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:50.849 15:10:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:50.849 15:10:06 -- common/autotest_common.sh@10 -- # set +x 00:22:51.109 15:10:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.109 15:10:06 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:22:51.109 15:10:06 -- target/tls.sh@49 -- # local key=/tmp/tmp.qFD9mpxRsg 00:22:51.109 15:10:06 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.109 [2024-04-18 15:10:06.792404] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.367 15:10:06 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.367 15:10:07 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.645 [2024-04-18 15:10:07.219746] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.645 [2024-04-18 15:10:07.219989] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.645 15:10:07 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.905 malloc0 00:22:51.905 15:10:07 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.166 15:10:07 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:52.166 [2024-04-18 15:10:07.871650] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.428 15:10:07 -- target/tls.sh@188 -- # bdevperf_pid=78042 00:22:52.428 15:10:07 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.428 15:10:07 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.428 15:10:07 -- target/tls.sh@191 -- # waitforlisten 78042 /var/tmp/bdevperf.sock 00:22:52.428 15:10:07 -- common/autotest_common.sh@817 -- # '[' -z 78042 ']' 00:22:52.428 15:10:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.428 15:10:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:52.428 15:10:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.428 15:10:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:52.428 15:10:07 -- common/autotest_common.sh@10 -- # set +x 00:22:52.428 [2024-04-18 15:10:07.940336] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:52.428 [2024-04-18 15:10:07.940416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78042 ] 00:22:52.428 [2024-04-18 15:10:08.085146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.689 [2024-04-18 15:10:08.177988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.258 15:10:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:53.258 15:10:08 -- common/autotest_common.sh@850 -- # return 0 00:22:53.258 15:10:08 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:22:53.517 [2024-04-18 15:10:09.007716] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.517 [2024-04-18 15:10:09.007838] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:53.517 TLSTESTn1 00:22:53.517 15:10:09 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:53.776 15:10:09 -- target/tls.sh@196 -- # tgtconf='{ 00:22:53.776 "subsystems": [ 00:22:53.776 { 00:22:53.776 "subsystem": "keyring", 00:22:53.776 "config": [] 00:22:53.776 }, 00:22:53.776 { 00:22:53.776 "subsystem": "iobuf", 00:22:53.776 "config": [ 00:22:53.776 { 00:22:53.776 "method": "iobuf_set_options", 00:22:53.776 "params": { 00:22:53.776 "large_bufsize": 135168, 00:22:53.776 "large_pool_count": 1024, 00:22:53.776 "small_bufsize": 8192, 00:22:53.776 "small_pool_count": 8192 00:22:53.776 } 
00:22:53.776 } 00:22:53.776 ] 00:22:53.776 }, 00:22:53.776 { 00:22:53.776 "subsystem": "sock", 00:22:53.776 "config": [ 00:22:53.776 { 00:22:53.777 "method": "sock_impl_set_options", 00:22:53.777 "params": { 00:22:53.777 "enable_ktls": false, 00:22:53.777 "enable_placement_id": 0, 00:22:53.777 "enable_quickack": false, 00:22:53.777 "enable_recv_pipe": true, 00:22:53.777 "enable_zerocopy_send_client": false, 00:22:53.777 "enable_zerocopy_send_server": true, 00:22:53.777 "impl_name": "posix", 00:22:53.777 "recv_buf_size": 2097152, 00:22:53.777 "send_buf_size": 2097152, 00:22:53.777 "tls_version": 0, 00:22:53.777 "zerocopy_threshold": 0 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "sock_impl_set_options", 00:22:53.777 "params": { 00:22:53.777 "enable_ktls": false, 00:22:53.777 "enable_placement_id": 0, 00:22:53.777 "enable_quickack": false, 00:22:53.777 "enable_recv_pipe": true, 00:22:53.777 "enable_zerocopy_send_client": false, 00:22:53.777 "enable_zerocopy_send_server": true, 00:22:53.777 "impl_name": "ssl", 00:22:53.777 "recv_buf_size": 4096, 00:22:53.777 "send_buf_size": 4096, 00:22:53.777 "tls_version": 0, 00:22:53.777 "zerocopy_threshold": 0 00:22:53.777 } 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "vmd", 00:22:53.777 "config": [] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "accel", 00:22:53.777 "config": [ 00:22:53.777 { 00:22:53.777 "method": "accel_set_options", 00:22:53.777 "params": { 00:22:53.777 "buf_count": 2048, 00:22:53.777 "large_cache_size": 16, 00:22:53.777 "sequence_count": 2048, 00:22:53.777 "small_cache_size": 128, 00:22:53.777 "task_count": 2048 00:22:53.777 } 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "bdev", 00:22:53.777 "config": [ 00:22:53.777 { 00:22:53.777 "method": "bdev_set_options", 00:22:53.777 "params": { 00:22:53.777 "bdev_auto_examine": true, 00:22:53.777 "bdev_io_cache_size": 256, 00:22:53.777 "bdev_io_pool_size": 65535, 00:22:53.777 "iobuf_large_cache_size": 16, 00:22:53.777 "iobuf_small_cache_size": 128 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_raid_set_options", 00:22:53.777 "params": { 00:22:53.777 "process_window_size_kb": 1024 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_iscsi_set_options", 00:22:53.777 "params": { 00:22:53.777 "timeout_sec": 30 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_nvme_set_options", 00:22:53.777 "params": { 00:22:53.777 "action_on_timeout": "none", 00:22:53.777 "allow_accel_sequence": false, 00:22:53.777 "arbitration_burst": 0, 00:22:53.777 "bdev_retry_count": 3, 00:22:53.777 "ctrlr_loss_timeout_sec": 0, 00:22:53.777 "delay_cmd_submit": true, 00:22:53.777 "dhchap_dhgroups": [ 00:22:53.777 "null", 00:22:53.777 "ffdhe2048", 00:22:53.777 "ffdhe3072", 00:22:53.777 "ffdhe4096", 00:22:53.777 "ffdhe6144", 00:22:53.777 "ffdhe8192" 00:22:53.777 ], 00:22:53.777 "dhchap_digests": [ 00:22:53.777 "sha256", 00:22:53.777 "sha384", 00:22:53.777 "sha512" 00:22:53.777 ], 00:22:53.777 "disable_auto_failback": false, 00:22:53.777 "fast_io_fail_timeout_sec": 0, 00:22:53.777 "generate_uuids": false, 00:22:53.777 "high_priority_weight": 0, 00:22:53.777 "io_path_stat": false, 00:22:53.777 "io_queue_requests": 0, 00:22:53.777 "keep_alive_timeout_ms": 10000, 00:22:53.777 "low_priority_weight": 0, 00:22:53.777 "medium_priority_weight": 0, 00:22:53.777 "nvme_adminq_poll_period_us": 10000, 00:22:53.777 "nvme_error_stat": false, 
00:22:53.777 "nvme_ioq_poll_period_us": 0, 00:22:53.777 "rdma_cm_event_timeout_ms": 0, 00:22:53.777 "rdma_max_cq_size": 0, 00:22:53.777 "rdma_srq_size": 0, 00:22:53.777 "reconnect_delay_sec": 0, 00:22:53.777 "timeout_admin_us": 0, 00:22:53.777 "timeout_us": 0, 00:22:53.777 "transport_ack_timeout": 0, 00:22:53.777 "transport_retry_count": 4, 00:22:53.777 "transport_tos": 0 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_nvme_set_hotplug", 00:22:53.777 "params": { 00:22:53.777 "enable": false, 00:22:53.777 "period_us": 100000 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_malloc_create", 00:22:53.777 "params": { 00:22:53.777 "block_size": 4096, 00:22:53.777 "name": "malloc0", 00:22:53.777 "num_blocks": 8192, 00:22:53.777 "optimal_io_boundary": 0, 00:22:53.777 "physical_block_size": 4096, 00:22:53.777 "uuid": "fbe4ca79-2124-44cf-af7c-374ae2a154c6" 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "bdev_wait_for_examine" 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "nbd", 00:22:53.777 "config": [] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "scheduler", 00:22:53.777 "config": [ 00:22:53.777 { 00:22:53.777 "method": "framework_set_scheduler", 00:22:53.777 "params": { 00:22:53.777 "name": "static" 00:22:53.777 } 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "subsystem": "nvmf", 00:22:53.777 "config": [ 00:22:53.777 { 00:22:53.777 "method": "nvmf_set_config", 00:22:53.777 "params": { 00:22:53.777 "admin_cmd_passthru": { 00:22:53.777 "identify_ctrlr": false 00:22:53.777 }, 00:22:53.777 "discovery_filter": "match_any" 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_set_max_subsystems", 00:22:53.777 "params": { 00:22:53.777 "max_subsystems": 1024 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_set_crdt", 00:22:53.777 "params": { 00:22:53.777 "crdt1": 0, 00:22:53.777 "crdt2": 0, 00:22:53.777 "crdt3": 0 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_create_transport", 00:22:53.777 "params": { 00:22:53.777 "abort_timeout_sec": 1, 00:22:53.777 "ack_timeout": 0, 00:22:53.777 "buf_cache_size": 4294967295, 00:22:53.777 "c2h_success": false, 00:22:53.777 "dif_insert_or_strip": false, 00:22:53.777 "in_capsule_data_size": 4096, 00:22:53.777 "io_unit_size": 131072, 00:22:53.777 "max_aq_depth": 128, 00:22:53.777 "max_io_qpairs_per_ctrlr": 127, 00:22:53.777 "max_io_size": 131072, 00:22:53.777 "max_queue_depth": 128, 00:22:53.777 "num_shared_buffers": 511, 00:22:53.777 "sock_priority": 0, 00:22:53.777 "trtype": "TCP", 00:22:53.777 "zcopy": false 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_create_subsystem", 00:22:53.777 "params": { 00:22:53.777 "allow_any_host": false, 00:22:53.777 "ana_reporting": false, 00:22:53.777 "max_cntlid": 65519, 00:22:53.777 "max_namespaces": 10, 00:22:53.777 "min_cntlid": 1, 00:22:53.777 "model_number": "SPDK bdev Controller", 00:22:53.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.777 "serial_number": "SPDK00000000000001" 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_subsystem_add_host", 00:22:53.777 "params": { 00:22:53.777 "host": "nqn.2016-06.io.spdk:host1", 00:22:53.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.777 "psk": "/tmp/tmp.qFD9mpxRsg" 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_subsystem_add_ns", 00:22:53.777 "params": { 00:22:53.777 
"namespace": { 00:22:53.777 "bdev_name": "malloc0", 00:22:53.777 "nguid": "FBE4CA79212444CFAF7C374AE2A154C6", 00:22:53.777 "no_auto_visible": false, 00:22:53.777 "nsid": 1, 00:22:53.777 "uuid": "fbe4ca79-2124-44cf-af7c-374ae2a154c6" 00:22:53.777 }, 00:22:53.777 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:53.777 } 00:22:53.777 }, 00:22:53.777 { 00:22:53.777 "method": "nvmf_subsystem_add_listener", 00:22:53.777 "params": { 00:22:53.777 "listen_address": { 00:22:53.777 "adrfam": "IPv4", 00:22:53.777 "traddr": "10.0.0.2", 00:22:53.777 "trsvcid": "4420", 00:22:53.777 "trtype": "TCP" 00:22:53.777 }, 00:22:53.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.777 "secure_channel": true 00:22:53.777 } 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 } 00:22:53.777 ] 00:22:53.777 }' 00:22:53.777 15:10:09 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:54.036 15:10:09 -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:54.036 "subsystems": [ 00:22:54.036 { 00:22:54.036 "subsystem": "keyring", 00:22:54.036 "config": [] 00:22:54.036 }, 00:22:54.036 { 00:22:54.036 "subsystem": "iobuf", 00:22:54.036 "config": [ 00:22:54.036 { 00:22:54.036 "method": "iobuf_set_options", 00:22:54.036 "params": { 00:22:54.037 "large_bufsize": 135168, 00:22:54.037 "large_pool_count": 1024, 00:22:54.037 "small_bufsize": 8192, 00:22:54.037 "small_pool_count": 8192 00:22:54.037 } 00:22:54.037 } 00:22:54.037 ] 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "subsystem": "sock", 00:22:54.037 "config": [ 00:22:54.037 { 00:22:54.037 "method": "sock_impl_set_options", 00:22:54.037 "params": { 00:22:54.037 "enable_ktls": false, 00:22:54.037 "enable_placement_id": 0, 00:22:54.037 "enable_quickack": false, 00:22:54.037 "enable_recv_pipe": true, 00:22:54.037 "enable_zerocopy_send_client": false, 00:22:54.037 "enable_zerocopy_send_server": true, 00:22:54.037 "impl_name": "posix", 00:22:54.037 "recv_buf_size": 2097152, 00:22:54.037 "send_buf_size": 2097152, 00:22:54.037 "tls_version": 0, 00:22:54.037 "zerocopy_threshold": 0 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "sock_impl_set_options", 00:22:54.037 "params": { 00:22:54.037 "enable_ktls": false, 00:22:54.037 "enable_placement_id": 0, 00:22:54.037 "enable_quickack": false, 00:22:54.037 "enable_recv_pipe": true, 00:22:54.037 "enable_zerocopy_send_client": false, 00:22:54.037 "enable_zerocopy_send_server": true, 00:22:54.037 "impl_name": "ssl", 00:22:54.037 "recv_buf_size": 4096, 00:22:54.037 "send_buf_size": 4096, 00:22:54.037 "tls_version": 0, 00:22:54.037 "zerocopy_threshold": 0 00:22:54.037 } 00:22:54.037 } 00:22:54.037 ] 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "subsystem": "vmd", 00:22:54.037 "config": [] 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "subsystem": "accel", 00:22:54.037 "config": [ 00:22:54.037 { 00:22:54.037 "method": "accel_set_options", 00:22:54.037 "params": { 00:22:54.037 "buf_count": 2048, 00:22:54.037 "large_cache_size": 16, 00:22:54.037 "sequence_count": 2048, 00:22:54.037 "small_cache_size": 128, 00:22:54.037 "task_count": 2048 00:22:54.037 } 00:22:54.037 } 00:22:54.037 ] 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "subsystem": "bdev", 00:22:54.037 "config": [ 00:22:54.037 { 00:22:54.037 "method": "bdev_set_options", 00:22:54.037 "params": { 00:22:54.037 "bdev_auto_examine": true, 00:22:54.037 "bdev_io_cache_size": 256, 00:22:54.037 "bdev_io_pool_size": 65535, 00:22:54.037 "iobuf_large_cache_size": 16, 00:22:54.037 "iobuf_small_cache_size": 128 00:22:54.037 } 00:22:54.037 }, 
00:22:54.037 { 00:22:54.037 "method": "bdev_raid_set_options", 00:22:54.037 "params": { 00:22:54.037 "process_window_size_kb": 1024 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "bdev_iscsi_set_options", 00:22:54.037 "params": { 00:22:54.037 "timeout_sec": 30 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "bdev_nvme_set_options", 00:22:54.037 "params": { 00:22:54.037 "action_on_timeout": "none", 00:22:54.037 "allow_accel_sequence": false, 00:22:54.037 "arbitration_burst": 0, 00:22:54.037 "bdev_retry_count": 3, 00:22:54.037 "ctrlr_loss_timeout_sec": 0, 00:22:54.037 "delay_cmd_submit": true, 00:22:54.037 "dhchap_dhgroups": [ 00:22:54.037 "null", 00:22:54.037 "ffdhe2048", 00:22:54.037 "ffdhe3072", 00:22:54.037 "ffdhe4096", 00:22:54.037 "ffdhe6144", 00:22:54.037 "ffdhe8192" 00:22:54.037 ], 00:22:54.037 "dhchap_digests": [ 00:22:54.037 "sha256", 00:22:54.037 "sha384", 00:22:54.037 "sha512" 00:22:54.037 ], 00:22:54.037 "disable_auto_failback": false, 00:22:54.037 "fast_io_fail_timeout_sec": 0, 00:22:54.037 "generate_uuids": false, 00:22:54.037 "high_priority_weight": 0, 00:22:54.037 "io_path_stat": false, 00:22:54.037 "io_queue_requests": 512, 00:22:54.037 "keep_alive_timeout_ms": 10000, 00:22:54.037 "low_priority_weight": 0, 00:22:54.037 "medium_priority_weight": 0, 00:22:54.037 "nvme_adminq_poll_period_us": 10000, 00:22:54.037 "nvme_error_stat": false, 00:22:54.037 "nvme_ioq_poll_period_us": 0, 00:22:54.037 "rdma_cm_event_timeout_ms": 0, 00:22:54.037 "rdma_max_cq_size": 0, 00:22:54.037 "rdma_srq_size": 0, 00:22:54.037 "reconnect_delay_sec": 0, 00:22:54.037 "timeout_admin_us": 0, 00:22:54.037 "timeout_us": 0, 00:22:54.037 "transport_ack_timeout": 0, 00:22:54.037 "transport_retry_count": 4, 00:22:54.037 "transport_tos": 0 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "bdev_nvme_attach_controller", 00:22:54.037 "params": { 00:22:54.037 "adrfam": "IPv4", 00:22:54.037 "ctrlr_loss_timeout_sec": 0, 00:22:54.037 "ddgst": false, 00:22:54.037 "fast_io_fail_timeout_sec": 0, 00:22:54.037 "hdgst": false, 00:22:54.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.037 "name": "TLSTEST", 00:22:54.037 "prchk_guard": false, 00:22:54.037 "prchk_reftag": false, 00:22:54.037 "psk": "/tmp/tmp.qFD9mpxRsg", 00:22:54.037 "reconnect_delay_sec": 0, 00:22:54.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.037 "traddr": "10.0.0.2", 00:22:54.037 "trsvcid": "4420", 00:22:54.037 "trtype": "TCP" 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "bdev_nvme_set_hotplug", 00:22:54.037 "params": { 00:22:54.037 "enable": false, 00:22:54.037 "period_us": 100000 00:22:54.037 } 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "method": "bdev_wait_for_examine" 00:22:54.037 } 00:22:54.037 ] 00:22:54.037 }, 00:22:54.037 { 00:22:54.037 "subsystem": "nbd", 00:22:54.037 "config": [] 00:22:54.037 } 00:22:54.037 ] 00:22:54.037 }' 00:22:54.037 15:10:09 -- target/tls.sh@199 -- # killprocess 78042 00:22:54.037 15:10:09 -- common/autotest_common.sh@936 -- # '[' -z 78042 ']' 00:22:54.037 15:10:09 -- common/autotest_common.sh@940 -- # kill -0 78042 00:22:54.037 15:10:09 -- common/autotest_common.sh@941 -- # uname 00:22:54.037 15:10:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.037 15:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78042 00:22:54.037 killing process with pid 78042 00:22:54.037 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.037 00:22:54.037 Latency(us) 00:22:54.037 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.037 =================================================================================================================== 00:22:54.037 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.037 15:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:54.037 15:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:54.037 15:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78042' 00:22:54.037 15:10:09 -- common/autotest_common.sh@955 -- # kill 78042 00:22:54.037 [2024-04-18 15:10:09.728979] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:54.037 15:10:09 -- common/autotest_common.sh@960 -- # wait 78042 00:22:54.296 15:10:09 -- target/tls.sh@200 -- # killprocess 77940 00:22:54.296 15:10:09 -- common/autotest_common.sh@936 -- # '[' -z 77940 ']' 00:22:54.296 15:10:09 -- common/autotest_common.sh@940 -- # kill -0 77940 00:22:54.296 15:10:09 -- common/autotest_common.sh@941 -- # uname 00:22:54.296 15:10:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.296 15:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77940 00:22:54.296 killing process with pid 77940 00:22:54.296 15:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:54.296 15:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:54.296 15:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77940' 00:22:54.296 15:10:09 -- common/autotest_common.sh@955 -- # kill 77940 00:22:54.296 [2024-04-18 15:10:09.994027] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:54.296 15:10:09 -- common/autotest_common.sh@960 -- # wait 77940 00:22:54.555 15:10:10 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:54.555 15:10:10 -- target/tls.sh@203 -- # echo '{ 00:22:54.555 "subsystems": [ 00:22:54.555 { 00:22:54.555 "subsystem": "keyring", 00:22:54.555 "config": [] 00:22:54.555 }, 00:22:54.555 { 00:22:54.555 "subsystem": "iobuf", 00:22:54.555 "config": [ 00:22:54.555 { 00:22:54.555 "method": "iobuf_set_options", 00:22:54.555 "params": { 00:22:54.555 "large_bufsize": 135168, 00:22:54.555 "large_pool_count": 1024, 00:22:54.555 "small_bufsize": 8192, 00:22:54.555 "small_pool_count": 8192 00:22:54.555 } 00:22:54.555 } 00:22:54.555 ] 00:22:54.555 }, 00:22:54.555 { 00:22:54.555 "subsystem": "sock", 00:22:54.555 "config": [ 00:22:54.555 { 00:22:54.555 "method": "sock_impl_set_options", 00:22:54.555 "params": { 00:22:54.555 "enable_ktls": false, 00:22:54.555 "enable_placement_id": 0, 00:22:54.555 "enable_quickack": false, 00:22:54.555 "enable_recv_pipe": true, 00:22:54.555 "enable_zerocopy_send_client": false, 00:22:54.555 "enable_zerocopy_send_server": true, 00:22:54.555 "impl_name": "posix", 00:22:54.555 "recv_buf_size": 2097152, 00:22:54.555 "send_buf_size": 2097152, 00:22:54.555 "tls_version": 0, 00:22:54.555 "zerocopy_threshold": 0 00:22:54.555 } 00:22:54.555 }, 00:22:54.555 { 00:22:54.555 "method": "sock_impl_set_options", 00:22:54.555 "params": { 00:22:54.556 "enable_ktls": false, 00:22:54.556 "enable_placement_id": 0, 00:22:54.556 "enable_quickack": false, 00:22:54.556 "enable_recv_pipe": true, 00:22:54.556 "enable_zerocopy_send_client": false, 00:22:54.556 "enable_zerocopy_send_server": true, 00:22:54.556 
"impl_name": "ssl", 00:22:54.556 "recv_buf_size": 4096, 00:22:54.556 "send_buf_size": 4096, 00:22:54.556 "tls_version": 0, 00:22:54.556 "zerocopy_threshold": 0 00:22:54.556 } 00:22:54.556 } 00:22:54.556 ] 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "vmd", 00:22:54.556 "config": [] 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "accel", 00:22:54.556 "config": [ 00:22:54.556 { 00:22:54.556 "method": "accel_set_options", 00:22:54.556 "params": { 00:22:54.556 "buf_count": 2048, 00:22:54.556 "large_cache_size": 16, 00:22:54.556 "sequence_count": 2048, 00:22:54.556 "small_cache_size": 128, 00:22:54.556 "task_count": 2048 00:22:54.556 } 00:22:54.556 } 00:22:54.556 ] 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "bdev", 00:22:54.556 "config": [ 00:22:54.556 { 00:22:54.556 "method": "bdev_set_options", 00:22:54.556 "params": { 00:22:54.556 "bdev_auto_examine": true, 00:22:54.556 "bdev_io_cache_size": 256, 00:22:54.556 "bdev_io_pool_size": 65535, 00:22:54.556 "iobuf_large_cache_size": 16, 00:22:54.556 "iobuf_small_cache_size": 128 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_raid_set_options", 00:22:54.556 "params": { 00:22:54.556 "process_window_size_kb": 1024 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_iscsi_set_options", 00:22:54.556 "params": { 00:22:54.556 "timeout_sec": 30 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_nvme_set_options", 00:22:54.556 "params": { 00:22:54.556 "action_on_timeout": "none", 00:22:54.556 "allow_accel_sequence": false, 00:22:54.556 "arbitration_burst": 0, 00:22:54.556 "bdev_retry_count": 3, 00:22:54.556 "ctrlr_loss_timeout_sec": 0, 00:22:54.556 "delay_cmd_submit": true, 00:22:54.556 "dhchap_dhgroups": [ 00:22:54.556 "null", 00:22:54.556 "ffdhe2048", 00:22:54.556 "ffdhe3072", 00:22:54.556 "ffdhe4096", 00:22:54.556 "ffdhe6144", 00:22:54.556 "ffdhe8192" 00:22:54.556 ], 00:22:54.556 "dhchap_digests": [ 00:22:54.556 "sha256", 00:22:54.556 "sha384", 00:22:54.556 "sha512" 00:22:54.556 ], 00:22:54.556 "disable_auto_failback": false, 00:22:54.556 "fast_io_fail_timeout_sec": 0, 00:22:54.556 "generate_uuids": false, 00:22:54.556 "high_priority_weight": 0, 00:22:54.556 "io_path_stat": false, 00:22:54.556 "io_queue_requests": 0, 00:22:54.556 "keep_alive_timeout_ms": 10000, 00:22:54.556 "low_priority_weight": 0, 00:22:54.556 "medium_priority_weight": 0, 00:22:54.556 "nvme_adminq_poll_period_us": 10000, 00:22:54.556 "nvme_error_stat": false, 00:22:54.556 "nvme_ioq_poll_period_us": 0, 00:22:54.556 "rdma_cm_event_timeout_ms": 0, 00:22:54.556 "rdma_max_cq_size": 0, 00:22:54.556 "rdma_srq_size": 0, 00:22:54.556 "reconnect_delay_sec": 0, 00:22:54.556 "timeout_admin_us": 0, 00:22:54.556 "timeout_us": 0, 00:22:54.556 "transport_ack_timeout": 0, 00:22:54.556 "transport_retry_count": 4, 00:22:54.556 "transport_tos": 0 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_nvme_set_hotplug", 00:22:54.556 "params": { 00:22:54.556 "enable": false, 00:22:54.556 "period_us": 100000 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_malloc_create", 00:22:54.556 "params": { 00:22:54.556 "block_size": 4096, 00:22:54.556 "name": "malloc0", 00:22:54.556 "num_blocks": 8192, 00:22:54.556 "optimal_io_boundary": 0, 00:22:54.556 "physical_block_size": 4096, 00:22:54.556 "uuid": "fbe4ca79-2124-44cf-af7c-374ae2a154c6" 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "bdev_wait_for_examine" 00:22:54.556 } 00:22:54.556 ] 
00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "nbd", 00:22:54.556 "config": [] 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "scheduler", 00:22:54.556 "config": [ 00:22:54.556 { 00:22:54.556 "method": "framework_set_scheduler", 00:22:54.556 "params": { 00:22:54.556 "name": "static" 00:22:54.556 } 00:22:54.556 } 00:22:54.556 ] 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "subsystem": "nvmf", 00:22:54.556 "config": [ 00:22:54.556 { 00:22:54.556 "method": "nvmf_set_config", 00:22:54.556 "params": { 00:22:54.556 "admin_cmd_passthru": { 00:22:54.556 "identify_ctrlr": false 00:22:54.556 }, 00:22:54.556 "discovery_filter": "match_any" 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_set_max_subsystems", 00:22:54.556 "params": { 00:22:54.556 "max_subsystems": 1024 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_set_crdt", 00:22:54.556 "params": { 00:22:54.556 "crdt1": 0, 00:22:54.556 "crdt2": 0, 00:22:54.556 "crdt3": 0 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_create_transport", 00:22:54.556 "params": { 00:22:54.556 "abort_timeout_sec": 1, 00:22:54.556 "ack_timeout": 0, 00:22:54.556 "buf_cache_size": 4294967295, 00:22:54.556 "c2h_success": false, 00:22:54.556 "dif_insert_or_strip": false, 00:22:54.556 "in_capsule_data_size": 4096, 00:22:54.556 "io_unit_size": 131072, 00:22:54.556 "max_aq_depth": 128, 00:22:54.556 "max_io_qpairs_per_ctrlr": 127, 00:22:54.556 "max_io_size": 131072, 00:22:54.556 "max_queue_depth": 128, 00:22:54.556 "num_shared_buffers": 511, 00:22:54.556 "sock_priority": 0, 00:22:54.556 "trtype": "TCP", 00:22:54.556 "zcopy": false 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_create_subsystem", 00:22:54.556 "params": { 00:22:54.556 "allow_any_host": false, 00:22:54.556 "ana_reporting": false, 00:22:54.556 "max_cntlid": 65519, 00:22:54.556 "max_namespaces": 10, 00:22:54.556 "min_cntlid": 1, 00:22:54.556 "model_number": "SPDK bdev Controller", 00:22:54.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.556 "serial_number": "SPDK00000000000001" 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_subsystem_add_host", 00:22:54.556 "params": { 00:22:54.556 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.556 "psk": "/tmp/tmp.qFD9mpxRsg" 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_subsystem_add_ns", 00:22:54.556 "params": { 00:22:54.556 "namespace": { 00:22:54.556 "bdev_name": "malloc0", 00:22:54.556 "nguid": "FBE4CA79212444CFAF7C374AE2A154C6", 00:22:54.556 "no_auto_visible": false, 00:22:54.556 "nsid": 1, 00:22:54.556 "uuid": "fbe4ca79-2124-44cf-af7c-374ae2a154c6" 00:22:54.556 }, 00:22:54.556 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:54.556 } 00:22:54.556 }, 00:22:54.556 { 00:22:54.556 "method": "nvmf_subsystem_add_listener", 00:22:54.556 "params": { 00:22:54.556 "listen_address": { 00:22:54.556 "adrfam": "IPv4", 00:22:54.556 "traddr": "10.0.0.2", 00:22:54.556 "trsvcid": "4420", 00:22:54.556 "trtype": "TCP" 00:22:54.556 }, 00:22:54.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.556 "secure_channel": true 00:22:54.556 } 00:22:54.557 } 00:22:54.557 ] 00:22:54.557 } 00:22:54.557 ] 00:22:54.557 }' 00:22:54.557 15:10:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:54.557 15:10:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:54.557 15:10:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.557 15:10:10 -- nvmf/common.sh@470 -- # 
nvmfpid=78119 00:22:54.557 15:10:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:54.557 15:10:10 -- nvmf/common.sh@471 -- # waitforlisten 78119 00:22:54.557 15:10:10 -- common/autotest_common.sh@817 -- # '[' -z 78119 ']' 00:22:54.557 15:10:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.557 15:10:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:54.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.557 15:10:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.557 15:10:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.557 15:10:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.815 [2024-04-18 15:10:10.281954] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:22:54.815 [2024-04-18 15:10:10.282044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.815 [2024-04-18 15:10:10.414453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.815 [2024-04-18 15:10:10.507883] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.815 [2024-04-18 15:10:10.508172] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.815 [2024-04-18 15:10:10.508236] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.815 [2024-04-18 15:10:10.508289] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.815 [2024-04-18 15:10:10.508345] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.815 [2024-04-18 15:10:10.508495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.074 [2024-04-18 15:10:10.711771] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.074 [2024-04-18 15:10:10.727694] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:55.074 [2024-04-18 15:10:10.743656] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.074 [2024-04-18 15:10:10.743963] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.643 15:10:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.643 15:10:11 -- common/autotest_common.sh@850 -- # return 0 00:22:55.643 15:10:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:55.643 15:10:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:55.643 15:10:11 -- common/autotest_common.sh@10 -- # set +x 00:22:55.643 15:10:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
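The target in this run is configured entirely from the JSON blob echoed by tls.sh rather than by individual RPC calls: the "-c /dev/fd/62" argument is the read end of a bash process substitution. A minimal sketch of that pattern, assuming a shell variable "$config" that holds the JSON shown above (the binary path and network namespace are the ones from this job):

# Sketch only: hand a pre-built JSON configuration to nvmf_tgt through process
# substitution, which is what the "-c /dev/fd/62" argument above implies.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
    -c <(echo "$config")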
00:22:55.643 15:10:11 -- target/tls.sh@207 -- # bdevperf_pid=78159 00:22:55.643 15:10:11 -- target/tls.sh@208 -- # waitforlisten 78159 /var/tmp/bdevperf.sock 00:22:55.643 15:10:11 -- common/autotest_common.sh@817 -- # '[' -z 78159 ']' 00:22:55.643 15:10:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.643 15:10:11 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:55.643 15:10:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.643 15:10:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.643 15:10:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.643 15:10:11 -- common/autotest_common.sh@10 -- # set +x 00:22:55.643 15:10:11 -- target/tls.sh@204 -- # echo '{ 00:22:55.643 "subsystems": [ 00:22:55.643 { 00:22:55.643 "subsystem": "keyring", 00:22:55.643 "config": [] 00:22:55.643 }, 00:22:55.643 { 00:22:55.643 "subsystem": "iobuf", 00:22:55.643 "config": [ 00:22:55.643 { 00:22:55.643 "method": "iobuf_set_options", 00:22:55.643 "params": { 00:22:55.643 "large_bufsize": 135168, 00:22:55.643 "large_pool_count": 1024, 00:22:55.643 "small_bufsize": 8192, 00:22:55.643 "small_pool_count": 8192 00:22:55.643 } 00:22:55.643 } 00:22:55.643 ] 00:22:55.643 }, 00:22:55.643 { 00:22:55.643 "subsystem": "sock", 00:22:55.643 "config": [ 00:22:55.643 { 00:22:55.643 "method": "sock_impl_set_options", 00:22:55.643 "params": { 00:22:55.643 "enable_ktls": false, 00:22:55.643 "enable_placement_id": 0, 00:22:55.643 "enable_quickack": false, 00:22:55.643 "enable_recv_pipe": true, 00:22:55.643 "enable_zerocopy_send_client": false, 00:22:55.643 "enable_zerocopy_send_server": true, 00:22:55.643 "impl_name": "posix", 00:22:55.643 "recv_buf_size": 2097152, 00:22:55.643 "send_buf_size": 2097152, 00:22:55.643 "tls_version": 0, 00:22:55.643 "zerocopy_threshold": 0 00:22:55.643 } 00:22:55.643 }, 00:22:55.643 { 00:22:55.644 "method": "sock_impl_set_options", 00:22:55.644 "params": { 00:22:55.644 "enable_ktls": false, 00:22:55.644 "enable_placement_id": 0, 00:22:55.644 "enable_quickack": false, 00:22:55.644 "enable_recv_pipe": true, 00:22:55.644 "enable_zerocopy_send_client": false, 00:22:55.644 "enable_zerocopy_send_server": true, 00:22:55.644 "impl_name": "ssl", 00:22:55.644 "recv_buf_size": 4096, 00:22:55.644 "send_buf_size": 4096, 00:22:55.644 "tls_version": 0, 00:22:55.644 "zerocopy_threshold": 0 00:22:55.644 } 00:22:55.644 } 00:22:55.644 ] 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "subsystem": "vmd", 00:22:55.644 "config": [] 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "subsystem": "accel", 00:22:55.644 "config": [ 00:22:55.644 { 00:22:55.644 "method": "accel_set_options", 00:22:55.644 "params": { 00:22:55.644 "buf_count": 2048, 00:22:55.644 "large_cache_size": 16, 00:22:55.644 "sequence_count": 2048, 00:22:55.644 "small_cache_size": 128, 00:22:55.644 "task_count": 2048 00:22:55.644 } 00:22:55.644 } 00:22:55.644 ] 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "subsystem": "bdev", 00:22:55.644 "config": [ 00:22:55.644 { 00:22:55.644 "method": "bdev_set_options", 00:22:55.644 "params": { 00:22:55.644 "bdev_auto_examine": true, 00:22:55.644 "bdev_io_cache_size": 256, 00:22:55.644 "bdev_io_pool_size": 65535, 00:22:55.644 "iobuf_large_cache_size": 16, 00:22:55.644 "iobuf_small_cache_size": 128 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 
00:22:55.644 "method": "bdev_raid_set_options", 00:22:55.644 "params": { 00:22:55.644 "process_window_size_kb": 1024 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "method": "bdev_iscsi_set_options", 00:22:55.644 "params": { 00:22:55.644 "timeout_sec": 30 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "method": "bdev_nvme_set_options", 00:22:55.644 "params": { 00:22:55.644 "action_on_timeout": "none", 00:22:55.644 "allow_accel_sequence": false, 00:22:55.644 "arbitration_burst": 0, 00:22:55.644 "bdev_retry_count": 3, 00:22:55.644 "ctrlr_loss_timeout_sec": 0, 00:22:55.644 "delay_cmd_submit": true, 00:22:55.644 "dhchap_dhgroups": [ 00:22:55.644 "null", 00:22:55.644 "ffdhe2048", 00:22:55.644 "ffdhe3072", 00:22:55.644 "ffdhe4096", 00:22:55.644 "ffdhe6144", 00:22:55.644 "ffdhe8192" 00:22:55.644 ], 00:22:55.644 "dhchap_digests": [ 00:22:55.644 "sha256", 00:22:55.644 "sha384", 00:22:55.644 "sha512" 00:22:55.644 ], 00:22:55.644 "disable_auto_failback": false, 00:22:55.644 "fast_io_fail_timeout_sec": 0, 00:22:55.644 "generate_uuids": false, 00:22:55.644 "high_priority_weight": 0, 00:22:55.644 "io_path_stat": false, 00:22:55.644 "io_queue_requests": 512, 00:22:55.644 "keep_alive_timeout_ms": 10000, 00:22:55.644 "low_priority_weight": 0, 00:22:55.644 "medium_priority_weight": 0, 00:22:55.644 "nvme_adminq_poll_period_us": 10000, 00:22:55.644 "nvme_error_stat": false, 00:22:55.644 "nvme_ioq_poll_period_us": 0, 00:22:55.644 "rdma_cm_event_timeout_ms": 0, 00:22:55.644 "rdma_max_cq_size": 0, 00:22:55.644 "rdma_srq_size": 0, 00:22:55.644 "reconnect_delay_sec": 0, 00:22:55.644 "timeout_admin_us": 0, 00:22:55.644 "timeout_us": 0, 00:22:55.644 "transport_ack_timeout": 0, 00:22:55.644 "transport_retry_count": 4, 00:22:55.644 "transport_tos": 0 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "method": "bdev_nvme_attach_controller", 00:22:55.644 "params": { 00:22:55.644 "adrfam": "IPv4", 00:22:55.644 "ctrlr_loss_timeout_sec": 0, 00:22:55.644 "ddgst": false, 00:22:55.644 "fast_io_fail_timeout_sec": 0, 00:22:55.644 "hdgst": false, 00:22:55.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.644 "name": "TLSTEST", 00:22:55.644 "prchk_guard": false, 00:22:55.644 "prchk_reftag": false, 00:22:55.644 "psk": "/tmp/tmp.qFD9mpxRsg", 00:22:55.644 "reconnect_delay_sec": 0, 00:22:55.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.644 "traddr": "10.0.0.2", 00:22:55.644 "trsvcid": "4420", 00:22:55.644 "trtype": "TCP" 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "method": "bdev_nvme_set_hotplug", 00:22:55.644 "params": { 00:22:55.644 "enable": false, 00:22:55.644 "period_us": 100000 00:22:55.644 } 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "method": "bdev_wait_for_examine" 00:22:55.644 } 00:22:55.644 ] 00:22:55.644 }, 00:22:55.644 { 00:22:55.644 "subsystem": "nbd", 00:22:55.644 "config": [] 00:22:55.644 } 00:22:55.644 ] 00:22:55.644 }' 00:22:55.644 [2024-04-18 15:10:11.255640] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:22:55.644 [2024-04-18 15:10:11.256279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78159 ] 00:22:55.903 [2024-04-18 15:10:11.406331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.903 [2024-04-18 15:10:11.506354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.162 [2024-04-18 15:10:11.661321] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.162 [2024-04-18 15:10:11.661478] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:56.730 15:10:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.730 15:10:12 -- common/autotest_common.sh@850 -- # return 0 00:22:56.730 15:10:12 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:56.730 Running I/O for 10 seconds... 00:23:06.751 00:23:06.751 Latency(us) 00:23:06.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:06.751 Verification LBA range: start 0x0 length 0x2000 00:23:06.751 TLSTESTn1 : 10.01 5534.87 21.62 0.00 0.00 23089.66 4579.62 21371.58 00:23:06.751 =================================================================================================================== 00:23:06.751 Total : 5534.87 21.62 0.00 0.00 23089.66 4579.62 21371.58 00:23:06.751 0 00:23:06.751 15:10:22 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.751 15:10:22 -- target/tls.sh@214 -- # killprocess 78159 00:23:06.751 15:10:22 -- common/autotest_common.sh@936 -- # '[' -z 78159 ']' 00:23:06.751 15:10:22 -- common/autotest_common.sh@940 -- # kill -0 78159 00:23:06.751 15:10:22 -- common/autotest_common.sh@941 -- # uname 00:23:06.751 15:10:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:06.752 15:10:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78159 00:23:06.752 15:10:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:06.752 15:10:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:06.752 killing process with pid 78159 00:23:06.752 15:10:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78159' 00:23:06.752 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.752 00:23:06.752 Latency(us) 00:23:06.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.752 =================================================================================================================== 00:23:06.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:06.752 15:10:22 -- common/autotest_common.sh@955 -- # kill 78159 00:23:06.752 [2024-04-18 15:10:22.349950] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:06.752 15:10:22 -- common/autotest_common.sh@960 -- # wait 78159 00:23:07.010 15:10:22 -- target/tls.sh@215 -- # killprocess 78119 00:23:07.011 15:10:22 -- common/autotest_common.sh@936 -- # '[' -z 78119 ']' 00:23:07.011 15:10:22 -- common/autotest_common.sh@940 -- # kill -0 78119 00:23:07.011 15:10:22 
-- common/autotest_common.sh@941 -- # uname 00:23:07.011 15:10:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.011 15:10:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78119 00:23:07.011 15:10:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.011 15:10:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.011 killing process with pid 78119 00:23:07.011 15:10:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78119' 00:23:07.011 15:10:22 -- common/autotest_common.sh@955 -- # kill 78119 00:23:07.011 [2024-04-18 15:10:22.627080] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:07.011 15:10:22 -- common/autotest_common.sh@960 -- # wait 78119 00:23:07.270 15:10:22 -- target/tls.sh@218 -- # nvmfappstart 00:23:07.270 15:10:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:07.270 15:10:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:07.270 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.270 15:10:22 -- nvmf/common.sh@470 -- # nvmfpid=78309 00:23:07.270 15:10:22 -- nvmf/common.sh@471 -- # waitforlisten 78309 00:23:07.270 15:10:22 -- common/autotest_common.sh@817 -- # '[' -z 78309 ']' 00:23:07.270 15:10:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.270 15:10:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.270 15:10:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.270 15:10:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.270 15:10:22 -- common/autotest_common.sh@10 -- # set +x 00:23:07.270 15:10:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:07.270 [2024-04-18 15:10:22.918904] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:07.270 [2024-04-18 15:10:22.918992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.529 [2024-04-18 15:10:23.061278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.529 [2024-04-18 15:10:23.154074] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.529 [2024-04-18 15:10:23.154146] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.529 [2024-04-18 15:10:23.154157] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.529 [2024-04-18 15:10:23.154167] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.529 [2024-04-18 15:10:23.154175] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:07.529 [2024-04-18 15:10:23.154220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.468 15:10:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:08.468 15:10:23 -- common/autotest_common.sh@850 -- # return 0 00:23:08.468 15:10:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:08.468 15:10:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:08.468 15:10:23 -- common/autotest_common.sh@10 -- # set +x 00:23:08.468 15:10:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.468 15:10:23 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.qFD9mpxRsg 00:23:08.468 15:10:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.qFD9mpxRsg 00:23:08.468 15:10:23 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.468 [2024-04-18 15:10:24.092875] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.468 15:10:24 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:08.727 15:10:24 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:08.985 [2024-04-18 15:10:24.576201] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:08.985 [2024-04-18 15:10:24.576455] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.985 15:10:24 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.244 malloc0 00:23:09.244 15:10:24 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.503 15:10:25 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg 00:23:09.763 [2024-04-18 15:10:25.260436] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:09.763 15:10:25 -- target/tls.sh@222 -- # bdevperf_pid=78413 00:23:09.763 15:10:25 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:09.763 15:10:25 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.763 15:10:25 -- target/tls.sh@225 -- # waitforlisten 78413 /var/tmp/bdevperf.sock 00:23:09.763 15:10:25 -- common/autotest_common.sh@817 -- # '[' -z 78413 ']' 00:23:09.763 15:10:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.763 15:10:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:09.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.763 15:10:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.763 15:10:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:09.763 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:23:09.763 [2024-04-18 15:10:25.342795] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
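For this round the target is configured step by step over its RPC socket instead of with a config file. Condensed from the setup_nvmf_tgt trace above, with the paths and the temporary PSK file name exactly as they appear in this run:

# TCP transport, a TLS-enabled listener (-k), a malloc-backed namespace, and a
# host entry bound to the PSK file.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qFD9mpxRsg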
00:23:09.763 [2024-04-18 15:10:25.342911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78413 ] 00:23:10.022 [2024-04-18 15:10:25.485977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.022 [2024-04-18 15:10:25.579593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.592 15:10:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:10.592 15:10:26 -- common/autotest_common.sh@850 -- # return 0 00:23:10.592 15:10:26 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFD9mpxRsg 00:23:10.850 15:10:26 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.109 [2024-04-18 15:10:26.643212] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.109 nvme0n1 00:23:11.109 15:10:26 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:11.368 Running I/O for 1 seconds... 00:23:12.352 00:23:12.352 Latency(us) 00:23:12.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.352 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.352 Verification LBA range: start 0x0 length 0x2000 00:23:12.352 nvme0n1 : 1.01 5408.92 21.13 0.00 0.00 23478.53 4869.14 19687.12 00:23:12.352 =================================================================================================================== 00:23:12.352 Total : 5408.92 21.13 0.00 0.00 23478.53 4869.14 19687.12 00:23:12.352 0 00:23:12.352 15:10:27 -- target/tls.sh@234 -- # killprocess 78413 00:23:12.352 15:10:27 -- common/autotest_common.sh@936 -- # '[' -z 78413 ']' 00:23:12.352 15:10:27 -- common/autotest_common.sh@940 -- # kill -0 78413 00:23:12.352 15:10:27 -- common/autotest_common.sh@941 -- # uname 00:23:12.352 15:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.352 15:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78413 00:23:12.352 15:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:12.352 15:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:12.352 killing process with pid 78413 00:23:12.352 15:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78413' 00:23:12.352 Received shutdown signal, test time was about 1.000000 seconds 00:23:12.352 00:23:12.352 Latency(us) 00:23:12.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.352 =================================================================================================================== 00:23:12.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.352 15:10:27 -- common/autotest_common.sh@955 -- # kill 78413 00:23:12.352 15:10:27 -- common/autotest_common.sh@960 -- # wait 78413 00:23:12.610 15:10:28 -- target/tls.sh@235 -- # killprocess 78309 00:23:12.610 15:10:28 -- common/autotest_common.sh@936 -- # '[' -z 78309 ']' 00:23:12.610 15:10:28 -- common/autotest_common.sh@940 -- # kill -0 78309 00:23:12.610 15:10:28 -- common/autotest_common.sh@941 -- # 
uname 00:23:12.610 15:10:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.610 15:10:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78309 00:23:12.610 15:10:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:12.610 15:10:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:12.610 killing process with pid 78309 00:23:12.610 15:10:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78309' 00:23:12.610 15:10:28 -- common/autotest_common.sh@955 -- # kill 78309 00:23:12.611 [2024-04-18 15:10:28.181862] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:12.611 15:10:28 -- common/autotest_common.sh@960 -- # wait 78309 00:23:12.869 15:10:28 -- target/tls.sh@238 -- # nvmfappstart 00:23:12.869 15:10:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:12.869 15:10:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:12.869 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:12.869 15:10:28 -- nvmf/common.sh@470 -- # nvmfpid=78489 00:23:12.869 15:10:28 -- nvmf/common.sh@471 -- # waitforlisten 78489 00:23:12.869 15:10:28 -- common/autotest_common.sh@817 -- # '[' -z 78489 ']' 00:23:12.869 15:10:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:12.869 15:10:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.869 15:10:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:12.869 15:10:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.869 15:10:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:12.869 15:10:28 -- common/autotest_common.sh@10 -- # set +x 00:23:12.870 [2024-04-18 15:10:28.474357] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:12.870 [2024-04-18 15:10:28.474441] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.128 [2024-04-18 15:10:28.618070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.128 [2024-04-18 15:10:28.698088] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.128 [2024-04-18 15:10:28.698150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.128 [2024-04-18 15:10:28.698168] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.128 [2024-04-18 15:10:28.698180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.128 [2024-04-18 15:10:28.698190] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
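On the initiator side of the run that just finished (bdevperf pid 78413), the PSK is first registered with the keyring module and the controller is then attached by key name rather than by file path. Condensed from the trace above:

# Register the PSK file as keyring key "key0", attach the TLS-protected
# controller using that key, then run the 1-second verify workload.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFD9mpxRsg
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests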
00:23:13.128 [2024-04-18 15:10:28.698243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.695 15:10:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:13.695 15:10:29 -- common/autotest_common.sh@850 -- # return 0 00:23:13.695 15:10:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:13.695 15:10:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:13.695 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.695 15:10:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.695 15:10:29 -- target/tls.sh@239 -- # rpc_cmd 00:23:13.695 15:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.696 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.955 [2024-04-18 15:10:29.404937] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.955 malloc0 00:23:13.955 [2024-04-18 15:10:29.437916] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.955 [2024-04-18 15:10:29.438381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.955 15:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.955 15:10:29 -- target/tls.sh@252 -- # bdevperf_pid=78538 00:23:13.955 15:10:29 -- target/tls.sh@254 -- # waitforlisten 78538 /var/tmp/bdevperf.sock 00:23:13.955 15:10:29 -- common/autotest_common.sh@817 -- # '[' -z 78538 ']' 00:23:13.955 15:10:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.955 15:10:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:13.955 15:10:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.955 15:10:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:13.956 15:10:29 -- common/autotest_common.sh@10 -- # set +x 00:23:13.956 15:10:29 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:13.956 [2024-04-18 15:10:29.521218] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
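In the run that follows, both sides' live state is captured with save_config after the I/O completes: the target dump is stored as tgtcfg and the bdevperf dump as bperfcfg. A sketch of the equivalent direct calls, assuming the target answers on the default /var/tmp/spdk.sock used by waitforlisten above, and writing to files here only for illustration since the script keeps the output in shell variables:

# Capture both sides' configuration as JSON, as done for tgtcfg/bperfcfg below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock save_config > tgtcfg.json
$rpc -s /var/tmp/bdevperf.sock save_config > bperfcfg.json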
00:23:13.956 [2024-04-18 15:10:29.521294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78538 ] 00:23:13.956 [2024-04-18 15:10:29.655082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.215 [2024-04-18 15:10:29.736731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.815 15:10:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:14.816 15:10:30 -- common/autotest_common.sh@850 -- # return 0 00:23:14.816 15:10:30 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFD9mpxRsg 00:23:15.076 15:10:30 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:15.335 [2024-04-18 15:10:30.826687] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.335 nvme0n1 00:23:15.335 15:10:30 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.335 Running I/O for 1 seconds... 00:23:16.716 00:23:16.716 Latency(us) 00:23:16.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.716 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:16.716 Verification LBA range: start 0x0 length 0x2000 00:23:16.716 nvme0n1 : 1.01 5505.93 21.51 0.00 0.00 23072.35 4842.82 18529.05 00:23:16.716 =================================================================================================================== 00:23:16.716 Total : 5505.93 21.51 0.00 0.00 23072.35 4842.82 18529.05 00:23:16.716 0 00:23:16.716 15:10:32 -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:16.716 15:10:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.716 15:10:32 -- common/autotest_common.sh@10 -- # set +x 00:23:16.716 15:10:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.716 15:10:32 -- target/tls.sh@263 -- # tgtcfg='{ 00:23:16.716 "subsystems": [ 00:23:16.716 { 00:23:16.716 "subsystem": "keyring", 00:23:16.716 "config": [ 00:23:16.716 { 00:23:16.716 "method": "keyring_file_add_key", 00:23:16.716 "params": { 00:23:16.716 "name": "key0", 00:23:16.716 "path": "/tmp/tmp.qFD9mpxRsg" 00:23:16.716 } 00:23:16.716 } 00:23:16.716 ] 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "subsystem": "iobuf", 00:23:16.716 "config": [ 00:23:16.716 { 00:23:16.716 "method": "iobuf_set_options", 00:23:16.716 "params": { 00:23:16.716 "large_bufsize": 135168, 00:23:16.716 "large_pool_count": 1024, 00:23:16.716 "small_bufsize": 8192, 00:23:16.716 "small_pool_count": 8192 00:23:16.716 } 00:23:16.716 } 00:23:16.716 ] 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "subsystem": "sock", 00:23:16.716 "config": [ 00:23:16.716 { 00:23:16.716 "method": "sock_impl_set_options", 00:23:16.716 "params": { 00:23:16.716 "enable_ktls": false, 00:23:16.716 "enable_placement_id": 0, 00:23:16.716 "enable_quickack": false, 00:23:16.716 "enable_recv_pipe": true, 00:23:16.716 "enable_zerocopy_send_client": false, 00:23:16.716 "enable_zerocopy_send_server": true, 00:23:16.716 "impl_name": "posix", 00:23:16.716 "recv_buf_size": 2097152, 00:23:16.716 "send_buf_size": 2097152, 
00:23:16.716 "tls_version": 0, 00:23:16.716 "zerocopy_threshold": 0 00:23:16.716 } 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "method": "sock_impl_set_options", 00:23:16.716 "params": { 00:23:16.716 "enable_ktls": false, 00:23:16.716 "enable_placement_id": 0, 00:23:16.716 "enable_quickack": false, 00:23:16.716 "enable_recv_pipe": true, 00:23:16.716 "enable_zerocopy_send_client": false, 00:23:16.716 "enable_zerocopy_send_server": true, 00:23:16.716 "impl_name": "ssl", 00:23:16.716 "recv_buf_size": 4096, 00:23:16.716 "send_buf_size": 4096, 00:23:16.716 "tls_version": 0, 00:23:16.716 "zerocopy_threshold": 0 00:23:16.716 } 00:23:16.716 } 00:23:16.716 ] 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "subsystem": "vmd", 00:23:16.716 "config": [] 00:23:16.716 }, 00:23:16.716 { 00:23:16.716 "subsystem": "accel", 00:23:16.716 "config": [ 00:23:16.716 { 00:23:16.716 "method": "accel_set_options", 00:23:16.716 "params": { 00:23:16.716 "buf_count": 2048, 00:23:16.716 "large_cache_size": 16, 00:23:16.716 "sequence_count": 2048, 00:23:16.716 "small_cache_size": 128, 00:23:16.716 "task_count": 2048 00:23:16.716 } 00:23:16.716 } 00:23:16.716 ] 00:23:16.716 }, 00:23:16.716 { 00:23:16.717 "subsystem": "bdev", 00:23:16.717 "config": [ 00:23:16.717 { 00:23:16.717 "method": "bdev_set_options", 00:23:16.717 "params": { 00:23:16.717 "bdev_auto_examine": true, 00:23:16.717 "bdev_io_cache_size": 256, 00:23:16.717 "bdev_io_pool_size": 65535, 00:23:16.717 "iobuf_large_cache_size": 16, 00:23:16.717 "iobuf_small_cache_size": 128 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "bdev_raid_set_options", 00:23:16.717 "params": { 00:23:16.717 "process_window_size_kb": 1024 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "bdev_iscsi_set_options", 00:23:16.717 "params": { 00:23:16.717 "timeout_sec": 30 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "bdev_nvme_set_options", 00:23:16.717 "params": { 00:23:16.717 "action_on_timeout": "none", 00:23:16.717 "allow_accel_sequence": false, 00:23:16.717 "arbitration_burst": 0, 00:23:16.717 "bdev_retry_count": 3, 00:23:16.717 "ctrlr_loss_timeout_sec": 0, 00:23:16.717 "delay_cmd_submit": true, 00:23:16.717 "dhchap_dhgroups": [ 00:23:16.717 "null", 00:23:16.717 "ffdhe2048", 00:23:16.717 "ffdhe3072", 00:23:16.717 "ffdhe4096", 00:23:16.717 "ffdhe6144", 00:23:16.717 "ffdhe8192" 00:23:16.717 ], 00:23:16.717 "dhchap_digests": [ 00:23:16.717 "sha256", 00:23:16.717 "sha384", 00:23:16.717 "sha512" 00:23:16.717 ], 00:23:16.717 "disable_auto_failback": false, 00:23:16.717 "fast_io_fail_timeout_sec": 0, 00:23:16.717 "generate_uuids": false, 00:23:16.717 "high_priority_weight": 0, 00:23:16.717 "io_path_stat": false, 00:23:16.717 "io_queue_requests": 0, 00:23:16.717 "keep_alive_timeout_ms": 10000, 00:23:16.717 "low_priority_weight": 0, 00:23:16.717 "medium_priority_weight": 0, 00:23:16.717 "nvme_adminq_poll_period_us": 10000, 00:23:16.717 "nvme_error_stat": false, 00:23:16.717 "nvme_ioq_poll_period_us": 0, 00:23:16.717 "rdma_cm_event_timeout_ms": 0, 00:23:16.717 "rdma_max_cq_size": 0, 00:23:16.717 "rdma_srq_size": 0, 00:23:16.717 "reconnect_delay_sec": 0, 00:23:16.717 "timeout_admin_us": 0, 00:23:16.717 "timeout_us": 0, 00:23:16.717 "transport_ack_timeout": 0, 00:23:16.717 "transport_retry_count": 4, 00:23:16.717 "transport_tos": 0 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "bdev_nvme_set_hotplug", 00:23:16.717 "params": { 00:23:16.717 "enable": false, 00:23:16.717 "period_us": 100000 00:23:16.717 } 00:23:16.717 
}, 00:23:16.717 { 00:23:16.717 "method": "bdev_malloc_create", 00:23:16.717 "params": { 00:23:16.717 "block_size": 4096, 00:23:16.717 "name": "malloc0", 00:23:16.717 "num_blocks": 8192, 00:23:16.717 "optimal_io_boundary": 0, 00:23:16.717 "physical_block_size": 4096, 00:23:16.717 "uuid": "52e3ed04-43ca-4f0b-a1f7-0a33640cd272" 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "bdev_wait_for_examine" 00:23:16.717 } 00:23:16.717 ] 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "subsystem": "nbd", 00:23:16.717 "config": [] 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "subsystem": "scheduler", 00:23:16.717 "config": [ 00:23:16.717 { 00:23:16.717 "method": "framework_set_scheduler", 00:23:16.717 "params": { 00:23:16.717 "name": "static" 00:23:16.717 } 00:23:16.717 } 00:23:16.717 ] 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "subsystem": "nvmf", 00:23:16.717 "config": [ 00:23:16.717 { 00:23:16.717 "method": "nvmf_set_config", 00:23:16.717 "params": { 00:23:16.717 "admin_cmd_passthru": { 00:23:16.717 "identify_ctrlr": false 00:23:16.717 }, 00:23:16.717 "discovery_filter": "match_any" 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_set_max_subsystems", 00:23:16.717 "params": { 00:23:16.717 "max_subsystems": 1024 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_set_crdt", 00:23:16.717 "params": { 00:23:16.717 "crdt1": 0, 00:23:16.717 "crdt2": 0, 00:23:16.717 "crdt3": 0 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_create_transport", 00:23:16.717 "params": { 00:23:16.717 "abort_timeout_sec": 1, 00:23:16.717 "ack_timeout": 0, 00:23:16.717 "buf_cache_size": 4294967295, 00:23:16.717 "c2h_success": false, 00:23:16.717 "dif_insert_or_strip": false, 00:23:16.717 "in_capsule_data_size": 4096, 00:23:16.717 "io_unit_size": 131072, 00:23:16.717 "max_aq_depth": 128, 00:23:16.717 "max_io_qpairs_per_ctrlr": 127, 00:23:16.717 "max_io_size": 131072, 00:23:16.717 "max_queue_depth": 128, 00:23:16.717 "num_shared_buffers": 511, 00:23:16.717 "sock_priority": 0, 00:23:16.717 "trtype": "TCP", 00:23:16.717 "zcopy": false 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_create_subsystem", 00:23:16.717 "params": { 00:23:16.717 "allow_any_host": false, 00:23:16.717 "ana_reporting": false, 00:23:16.717 "max_cntlid": 65519, 00:23:16.717 "max_namespaces": 32, 00:23:16.717 "min_cntlid": 1, 00:23:16.717 "model_number": "SPDK bdev Controller", 00:23:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.717 "serial_number": "00000000000000000000" 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_subsystem_add_host", 00:23:16.717 "params": { 00:23:16.717 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.717 "psk": "key0" 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_subsystem_add_ns", 00:23:16.717 "params": { 00:23:16.717 "namespace": { 00:23:16.717 "bdev_name": "malloc0", 00:23:16.717 "nguid": "52E3ED0443CA4F0BA1F70A33640CD272", 00:23:16.717 "no_auto_visible": false, 00:23:16.717 "nsid": 1, 00:23:16.717 "uuid": "52e3ed04-43ca-4f0b-a1f7-0a33640cd272" 00:23:16.717 }, 00:23:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:16.717 } 00:23:16.717 }, 00:23:16.717 { 00:23:16.717 "method": "nvmf_subsystem_add_listener", 00:23:16.717 "params": { 00:23:16.717 "listen_address": { 00:23:16.717 "adrfam": "IPv4", 00:23:16.717 "traddr": "10.0.0.2", 00:23:16.717 "trsvcid": "4420", 00:23:16.717 "trtype": "TCP" 00:23:16.717 }, 
00:23:16.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.717 "secure_channel": true 00:23:16.717 } 00:23:16.717 } 00:23:16.717 ] 00:23:16.717 } 00:23:16.717 ] 00:23:16.717 }' 00:23:16.717 15:10:32 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:16.976 15:10:32 -- target/tls.sh@264 -- # bperfcfg='{ 00:23:16.976 "subsystems": [ 00:23:16.976 { 00:23:16.976 "subsystem": "keyring", 00:23:16.976 "config": [ 00:23:16.976 { 00:23:16.976 "method": "keyring_file_add_key", 00:23:16.976 "params": { 00:23:16.976 "name": "key0", 00:23:16.976 "path": "/tmp/tmp.qFD9mpxRsg" 00:23:16.976 } 00:23:16.976 } 00:23:16.976 ] 00:23:16.976 }, 00:23:16.976 { 00:23:16.976 "subsystem": "iobuf", 00:23:16.976 "config": [ 00:23:16.976 { 00:23:16.976 "method": "iobuf_set_options", 00:23:16.976 "params": { 00:23:16.976 "large_bufsize": 135168, 00:23:16.976 "large_pool_count": 1024, 00:23:16.976 "small_bufsize": 8192, 00:23:16.976 "small_pool_count": 8192 00:23:16.976 } 00:23:16.976 } 00:23:16.976 ] 00:23:16.976 }, 00:23:16.976 { 00:23:16.976 "subsystem": "sock", 00:23:16.976 "config": [ 00:23:16.976 { 00:23:16.976 "method": "sock_impl_set_options", 00:23:16.976 "params": { 00:23:16.976 "enable_ktls": false, 00:23:16.976 "enable_placement_id": 0, 00:23:16.976 "enable_quickack": false, 00:23:16.976 "enable_recv_pipe": true, 00:23:16.976 "enable_zerocopy_send_client": false, 00:23:16.976 "enable_zerocopy_send_server": true, 00:23:16.976 "impl_name": "posix", 00:23:16.976 "recv_buf_size": 2097152, 00:23:16.976 "send_buf_size": 2097152, 00:23:16.976 "tls_version": 0, 00:23:16.976 "zerocopy_threshold": 0 00:23:16.976 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "sock_impl_set_options", 00:23:16.977 "params": { 00:23:16.977 "enable_ktls": false, 00:23:16.977 "enable_placement_id": 0, 00:23:16.977 "enable_quickack": false, 00:23:16.977 "enable_recv_pipe": true, 00:23:16.977 "enable_zerocopy_send_client": false, 00:23:16.977 "enable_zerocopy_send_server": true, 00:23:16.977 "impl_name": "ssl", 00:23:16.977 "recv_buf_size": 4096, 00:23:16.977 "send_buf_size": 4096, 00:23:16.977 "tls_version": 0, 00:23:16.977 "zerocopy_threshold": 0 00:23:16.977 } 00:23:16.977 } 00:23:16.977 ] 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "subsystem": "vmd", 00:23:16.977 "config": [] 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "subsystem": "accel", 00:23:16.977 "config": [ 00:23:16.977 { 00:23:16.977 "method": "accel_set_options", 00:23:16.977 "params": { 00:23:16.977 "buf_count": 2048, 00:23:16.977 "large_cache_size": 16, 00:23:16.977 "sequence_count": 2048, 00:23:16.977 "small_cache_size": 128, 00:23:16.977 "task_count": 2048 00:23:16.977 } 00:23:16.977 } 00:23:16.977 ] 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "subsystem": "bdev", 00:23:16.977 "config": [ 00:23:16.977 { 00:23:16.977 "method": "bdev_set_options", 00:23:16.977 "params": { 00:23:16.977 "bdev_auto_examine": true, 00:23:16.977 "bdev_io_cache_size": 256, 00:23:16.977 "bdev_io_pool_size": 65535, 00:23:16.977 "iobuf_large_cache_size": 16, 00:23:16.977 "iobuf_small_cache_size": 128 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_raid_set_options", 00:23:16.977 "params": { 00:23:16.977 "process_window_size_kb": 1024 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_iscsi_set_options", 00:23:16.977 "params": { 00:23:16.977 "timeout_sec": 30 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_nvme_set_options", 00:23:16.977 "params": { 
00:23:16.977 "action_on_timeout": "none", 00:23:16.977 "allow_accel_sequence": false, 00:23:16.977 "arbitration_burst": 0, 00:23:16.977 "bdev_retry_count": 3, 00:23:16.977 "ctrlr_loss_timeout_sec": 0, 00:23:16.977 "delay_cmd_submit": true, 00:23:16.977 "dhchap_dhgroups": [ 00:23:16.977 "null", 00:23:16.977 "ffdhe2048", 00:23:16.977 "ffdhe3072", 00:23:16.977 "ffdhe4096", 00:23:16.977 "ffdhe6144", 00:23:16.977 "ffdhe8192" 00:23:16.977 ], 00:23:16.977 "dhchap_digests": [ 00:23:16.977 "sha256", 00:23:16.977 "sha384", 00:23:16.977 "sha512" 00:23:16.977 ], 00:23:16.977 "disable_auto_failback": false, 00:23:16.977 "fast_io_fail_timeout_sec": 0, 00:23:16.977 "generate_uuids": false, 00:23:16.977 "high_priority_weight": 0, 00:23:16.977 "io_path_stat": false, 00:23:16.977 "io_queue_requests": 512, 00:23:16.977 "keep_alive_timeout_ms": 10000, 00:23:16.977 "low_priority_weight": 0, 00:23:16.977 "medium_priority_weight": 0, 00:23:16.977 "nvme_adminq_poll_period_us": 10000, 00:23:16.977 "nvme_error_stat": false, 00:23:16.977 "nvme_ioq_poll_period_us": 0, 00:23:16.977 "rdma_cm_event_timeout_ms": 0, 00:23:16.977 "rdma_max_cq_size": 0, 00:23:16.977 "rdma_srq_size": 0, 00:23:16.977 "reconnect_delay_sec": 0, 00:23:16.977 "timeout_admin_us": 0, 00:23:16.977 "timeout_us": 0, 00:23:16.977 "transport_ack_timeout": 0, 00:23:16.977 "transport_retry_count": 4, 00:23:16.977 "transport_tos": 0 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_nvme_attach_controller", 00:23:16.977 "params": { 00:23:16.977 "adrfam": "IPv4", 00:23:16.977 "ctrlr_loss_timeout_sec": 0, 00:23:16.977 "ddgst": false, 00:23:16.977 "fast_io_fail_timeout_sec": 0, 00:23:16.977 "hdgst": false, 00:23:16.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.977 "name": "nvme0", 00:23:16.977 "prchk_guard": false, 00:23:16.977 "prchk_reftag": false, 00:23:16.977 "psk": "key0", 00:23:16.977 "reconnect_delay_sec": 0, 00:23:16.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.977 "traddr": "10.0.0.2", 00:23:16.977 "trsvcid": "4420", 00:23:16.977 "trtype": "TCP" 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_nvme_set_hotplug", 00:23:16.977 "params": { 00:23:16.977 "enable": false, 00:23:16.977 "period_us": 100000 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_enable_histogram", 00:23:16.977 "params": { 00:23:16.977 "enable": true, 00:23:16.977 "name": "nvme0n1" 00:23:16.977 } 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "method": "bdev_wait_for_examine" 00:23:16.977 } 00:23:16.977 ] 00:23:16.977 }, 00:23:16.977 { 00:23:16.977 "subsystem": "nbd", 00:23:16.977 "config": [] 00:23:16.977 } 00:23:16.977 ] 00:23:16.977 }' 00:23:16.977 15:10:32 -- target/tls.sh@266 -- # killprocess 78538 00:23:16.977 15:10:32 -- common/autotest_common.sh@936 -- # '[' -z 78538 ']' 00:23:16.977 15:10:32 -- common/autotest_common.sh@940 -- # kill -0 78538 00:23:16.977 15:10:32 -- common/autotest_common.sh@941 -- # uname 00:23:16.977 15:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:16.977 15:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78538 00:23:16.977 15:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:16.977 15:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:16.977 15:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78538' 00:23:16.977 killing process with pid 78538 00:23:16.977 15:10:32 -- common/autotest_common.sh@955 -- # kill 78538 00:23:16.977 Received shutdown signal, 
test time was about 1.000000 seconds 00:23:16.977 00:23:16.977 Latency(us) 00:23:16.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.977 =================================================================================================================== 00:23:16.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.977 15:10:32 -- common/autotest_common.sh@960 -- # wait 78538 00:23:17.236 15:10:32 -- target/tls.sh@267 -- # killprocess 78489 00:23:17.236 15:10:32 -- common/autotest_common.sh@936 -- # '[' -z 78489 ']' 00:23:17.236 15:10:32 -- common/autotest_common.sh@940 -- # kill -0 78489 00:23:17.236 15:10:32 -- common/autotest_common.sh@941 -- # uname 00:23:17.236 15:10:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.236 15:10:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78489 00:23:17.236 15:10:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:17.236 15:10:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:17.236 15:10:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78489' 00:23:17.236 killing process with pid 78489 00:23:17.236 15:10:32 -- common/autotest_common.sh@955 -- # kill 78489 00:23:17.236 15:10:32 -- common/autotest_common.sh@960 -- # wait 78489 00:23:17.495 15:10:33 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:17.495 15:10:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:17.495 15:10:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:17.495 15:10:33 -- target/tls.sh@269 -- # echo '{ 00:23:17.495 "subsystems": [ 00:23:17.495 { 00:23:17.495 "subsystem": "keyring", 00:23:17.495 "config": [ 00:23:17.495 { 00:23:17.495 "method": "keyring_file_add_key", 00:23:17.495 "params": { 00:23:17.495 "name": "key0", 00:23:17.495 "path": "/tmp/tmp.qFD9mpxRsg" 00:23:17.495 } 00:23:17.495 } 00:23:17.495 ] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "iobuf", 00:23:17.495 "config": [ 00:23:17.495 { 00:23:17.495 "method": "iobuf_set_options", 00:23:17.495 "params": { 00:23:17.495 "large_bufsize": 135168, 00:23:17.495 "large_pool_count": 1024, 00:23:17.495 "small_bufsize": 8192, 00:23:17.495 "small_pool_count": 8192 00:23:17.495 } 00:23:17.495 } 00:23:17.495 ] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "sock", 00:23:17.495 "config": [ 00:23:17.495 { 00:23:17.495 "method": "sock_impl_set_options", 00:23:17.495 "params": { 00:23:17.495 "enable_ktls": false, 00:23:17.495 "enable_placement_id": 0, 00:23:17.495 "enable_quickack": false, 00:23:17.495 "enable_recv_pipe": true, 00:23:17.495 "enable_zerocopy_send_client": false, 00:23:17.495 "enable_zerocopy_send_server": true, 00:23:17.495 "impl_name": "posix", 00:23:17.495 "recv_buf_size": 2097152, 00:23:17.495 "send_buf_size": 2097152, 00:23:17.495 "tls_version": 0, 00:23:17.495 "zerocopy_threshold": 0 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "sock_impl_set_options", 00:23:17.495 "params": { 00:23:17.495 "enable_ktls": false, 00:23:17.495 "enable_placement_id": 0, 00:23:17.495 "enable_quickack": false, 00:23:17.495 "enable_recv_pipe": true, 00:23:17.495 "enable_zerocopy_send_client": false, 00:23:17.495 "enable_zerocopy_send_server": true, 00:23:17.495 "impl_name": "ssl", 00:23:17.495 "recv_buf_size": 4096, 00:23:17.495 "send_buf_size": 4096, 00:23:17.495 "tls_version": 0, 00:23:17.495 "zerocopy_threshold": 0 00:23:17.495 } 00:23:17.495 } 00:23:17.495 ] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "vmd", 
00:23:17.495 "config": [] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "accel", 00:23:17.495 "config": [ 00:23:17.495 { 00:23:17.495 "method": "accel_set_options", 00:23:17.495 "params": { 00:23:17.495 "buf_count": 2048, 00:23:17.495 "large_cache_size": 16, 00:23:17.495 "sequence_count": 2048, 00:23:17.495 "small_cache_size": 128, 00:23:17.495 "task_count": 2048 00:23:17.495 } 00:23:17.495 } 00:23:17.495 ] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "bdev", 00:23:17.495 "config": [ 00:23:17.495 { 00:23:17.495 "method": "bdev_set_options", 00:23:17.495 "params": { 00:23:17.495 "bdev_auto_examine": true, 00:23:17.495 "bdev_io_cache_size": 256, 00:23:17.495 "bdev_io_pool_size": 65535, 00:23:17.495 "iobuf_large_cache_size": 16, 00:23:17.495 "iobuf_small_cache_size": 128 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_raid_set_options", 00:23:17.495 "params": { 00:23:17.495 "process_window_size_kb": 1024 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_iscsi_set_options", 00:23:17.495 "params": { 00:23:17.495 "timeout_sec": 30 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_nvme_set_options", 00:23:17.495 "params": { 00:23:17.495 "action_on_timeout": "none", 00:23:17.495 "allow_accel_sequence": false, 00:23:17.495 "arbitration_burst": 0, 00:23:17.495 "bdev_retry_count": 3, 00:23:17.495 "ctrlr_loss_timeout_sec": 0, 00:23:17.495 "delay_cmd_submit": true, 00:23:17.495 "dhchap_dhgroups": [ 00:23:17.495 "null", 00:23:17.495 "ffdhe2048", 00:23:17.495 "ffdhe3072", 00:23:17.495 "ffdhe4096", 00:23:17.495 "ffdhe6144", 00:23:17.495 "ffdhe8192" 00:23:17.495 ], 00:23:17.495 "dhchap_digests": [ 00:23:17.495 "sha256", 00:23:17.495 "sha384", 00:23:17.495 "sha512" 00:23:17.495 ], 00:23:17.495 "disable_auto_failback": false, 00:23:17.495 "fast_io_fail_timeout_sec": 0, 00:23:17.495 "generate_uuids": false, 00:23:17.495 "high_priority_weight": 0, 00:23:17.495 "io_path_stat": false, 00:23:17.495 "io_queue_requests": 0, 00:23:17.495 "keep_alive_timeout_ms": 10000, 00:23:17.495 "low_priority_weight": 0, 00:23:17.495 "medium_priority_weight": 0, 00:23:17.495 "nvme_adminq_poll_period_us": 10000, 00:23:17.495 "nvme_error_stat": false, 00:23:17.495 "nvme_ioq_poll_period_us": 0, 00:23:17.495 "rdma_cm_event_timeout_ms": 0, 00:23:17.495 "rdma_max_cq_size": 0, 00:23:17.495 "rdma_srq_size": 0, 00:23:17.495 "reconnect_delay_sec": 0, 00:23:17.495 "timeout_admin_us": 0, 00:23:17.495 "timeout_us": 0, 00:23:17.495 "transport_ack_timeout": 0, 00:23:17.495 "transport_retry_count": 4, 00:23:17.495 "transport_tos": 0 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_nvme_set_hotplug", 00:23:17.495 "params": { 00:23:17.495 "enable": false, 00:23:17.495 "period_us": 100000 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_malloc_create", 00:23:17.495 "params": { 00:23:17.495 "block_size": 4096, 00:23:17.495 "name": "malloc0", 00:23:17.495 "num_blocks": 8192, 00:23:17.495 "optimal_io_boundary": 0, 00:23:17.495 "physical_block_size": 4096, 00:23:17.495 "uuid": "52e3ed04-43ca-4f0b-a1f7-0a33640cd272" 00:23:17.495 } 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "method": "bdev_wait_for_examine" 00:23:17.495 } 00:23:17.495 ] 00:23:17.495 }, 00:23:17.495 { 00:23:17.495 "subsystem": "nbd", 00:23:17.495 "config": [] 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "subsystem": "scheduler", 00:23:17.496 "config": [ 00:23:17.496 { 00:23:17.496 "method": "framework_set_scheduler", 00:23:17.496 "params": 
{ 00:23:17.496 "name": "static" 00:23:17.496 } 00:23:17.496 } 00:23:17.496 ] 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "subsystem": "nvmf", 00:23:17.496 "config": [ 00:23:17.496 { 00:23:17.496 "method": "nvmf_set_config", 00:23:17.496 "params": { 00:23:17.496 "admin_cmd_passthru": { 00:23:17.496 "identify_ctrlr": false 00:23:17.496 }, 00:23:17.496 "discovery_filter": "match_any" 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_set_max_subsystems", 00:23:17.496 "params": { 00:23:17.496 "max_subsystems": 1024 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_set_crdt", 00:23:17.496 "params": { 00:23:17.496 "crdt1": 0, 00:23:17.496 "crdt2": 0, 00:23:17.496 "crdt3": 0 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_create_transport", 00:23:17.496 "params": { 00:23:17.496 "abort_timeout_sec": 1, 00:23:17.496 "ack_timeout": 0, 00:23:17.496 "buf_cache_size": 4294967295, 00:23:17.496 "c2h_success": false, 00:23:17.496 "dif_insert_or_strip": false, 00:23:17.496 "in_capsule_data_size": 4096, 00:23:17.496 "io_unit_size": 131072, 00:23:17.496 "max_aq_depth": 128, 00:23:17.496 "max_io_qpairs_per_ctrlr": 127, 00:23:17.496 "max_io_size": 131072, 00:23:17.496 "max_queue_depth": 128, 00:23:17.496 "num_shared_buffers": 511, 00:23:17.496 "sock_priority": 0, 00:23:17.496 "trtype": "TCP", 00:23:17.496 "zcopy": false 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_create_subsystem", 00:23:17.496 "params": { 00:23:17.496 "allow_any_host": false, 00:23:17.496 "ana_reporting": false, 00:23:17.496 "max_cntlid": 65519, 00:23:17.496 "max_namespaces": 32, 00:23:17.496 "min_cntlid": 1, 00:23:17.496 "model_number": "SPDK bdev Controller", 00:23:17.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.496 "serial_number": "00000000000000000000" 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_subsystem_add_host", 00:23:17.496 "params": { 00:23:17.496 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.496 "psk": "key0" 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_subsystem_add_ns", 00:23:17.496 "params": { 00:23:17.496 "namespace": { 00:23:17.496 "bdev_name": "malloc0", 00:23:17.496 "nguid": "52E3ED0443CA4F0BA1F70A33640CD272", 00:23:17.496 "no_auto_visible": false, 00:23:17.496 "nsid": 1, 00:23:17.496 "uuid": "52e3ed04-43ca-4f0b-a1f7-0a33640cd272" 00:23:17.496 }, 00:23:17.496 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:17.496 } 00:23:17.496 }, 00:23:17.496 { 00:23:17.496 "method": "nvmf_subsystem_add_listener", 00:23:17.496 "params": { 00:23:17.496 "listen_address": { 00:23:17.496 "adrfam": "IPv4", 00:23:17.496 "traddr": "10.0.0.2", 00:23:17.496 "trsvcid": "4420", 00:23:17.496 "trtype": "TCP" 00:23:17.496 }, 00:23:17.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.496 "secure_channel": true 00:23:17.496 } 00:23:17.496 } 00:23:17.496 ] 00:23:17.496 } 00:23:17.496 ] 00:23:17.496 }' 00:23:17.496 15:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:17.496 15:10:33 -- nvmf/common.sh@470 -- # nvmfpid=78624 00:23:17.496 15:10:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:17.496 15:10:33 -- nvmf/common.sh@471 -- # waitforlisten 78624 00:23:17.496 15:10:33 -- common/autotest_common.sh@817 -- # '[' -z 78624 ']' 00:23:17.496 15:10:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.496 15:10:33 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.496 15:10:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.496 15:10:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.496 15:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:17.496 [2024-04-18 15:10:33.097225] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:17.496 [2024-04-18 15:10:33.097311] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.756 [2024-04-18 15:10:33.238594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.756 [2024-04-18 15:10:33.329274] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.756 [2024-04-18 15:10:33.329347] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.756 [2024-04-18 15:10:33.329365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.756 [2024-04-18 15:10:33.329389] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.756 [2024-04-18 15:10:33.329399] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.756 [2024-04-18 15:10:33.329536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.015 [2024-04-18 15:10:33.539909] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.015 [2024-04-18 15:10:33.571798] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.015 [2024-04-18 15:10:33.572072] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.292 15:10:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:18.292 15:10:33 -- common/autotest_common.sh@850 -- # return 0 00:23:18.292 15:10:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:18.292 15:10:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:18.292 15:10:33 -- common/autotest_common.sh@10 -- # set +x 00:23:18.552 15:10:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.552 15:10:34 -- target/tls.sh@272 -- # bdevperf_pid=78668 00:23:18.552 15:10:34 -- target/tls.sh@273 -- # waitforlisten 78668 /var/tmp/bdevperf.sock 00:23:18.552 15:10:34 -- common/autotest_common.sh@817 -- # '[' -z 78668 ']' 00:23:18.552 15:10:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.552 15:10:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:18.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.552 15:10:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
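For reference, both the target above and the bdevperf instance launched next receive their JSON configuration over an anonymous file descriptor ("-c /dev/fd/62" / "-c /dev/fd/63") rather than a file on disk. A minimal sketch of that pattern, with the binary path taken from the log and a deliberately tiny config (everything else is an assumption, not the autotest's literal invocation):

  #!/usr/bin/env bash
  # Sketch: stream a JSON config into nvmf_tgt via process substitution,
  # which bash exposes to the child process as /dev/fd/NN.
  config='{"subsystems":[{"subsystem":"nvmf","config":[{"method":"nvmf_set_max_subsystems","params":{"max_subsystems":1024}}]}]}'
  exec /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(printf '%s' "$config")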
00:23:18.552 15:10:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:18.552 15:10:34 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:18.552 15:10:34 -- common/autotest_common.sh@10 -- # set +x 00:23:18.552 15:10:34 -- target/tls.sh@270 -- # echo '{ 00:23:18.552 "subsystems": [ 00:23:18.552 { 00:23:18.552 "subsystem": "keyring", 00:23:18.552 "config": [ 00:23:18.552 { 00:23:18.552 "method": "keyring_file_add_key", 00:23:18.552 "params": { 00:23:18.552 "name": "key0", 00:23:18.552 "path": "/tmp/tmp.qFD9mpxRsg" 00:23:18.552 } 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "iobuf", 00:23:18.552 "config": [ 00:23:18.552 { 00:23:18.552 "method": "iobuf_set_options", 00:23:18.552 "params": { 00:23:18.552 "large_bufsize": 135168, 00:23:18.552 "large_pool_count": 1024, 00:23:18.552 "small_bufsize": 8192, 00:23:18.552 "small_pool_count": 8192 00:23:18.552 } 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "sock", 00:23:18.552 "config": [ 00:23:18.552 { 00:23:18.552 "method": "sock_impl_set_options", 00:23:18.552 "params": { 00:23:18.552 "enable_ktls": false, 00:23:18.552 "enable_placement_id": 0, 00:23:18.552 "enable_quickack": false, 00:23:18.552 "enable_recv_pipe": true, 00:23:18.552 "enable_zerocopy_send_client": false, 00:23:18.552 "enable_zerocopy_send_server": true, 00:23:18.552 "impl_name": "posix", 00:23:18.552 "recv_buf_size": 2097152, 00:23:18.552 "send_buf_size": 2097152, 00:23:18.552 "tls_version": 0, 00:23:18.552 "zerocopy_threshold": 0 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "sock_impl_set_options", 00:23:18.552 "params": { 00:23:18.552 "enable_ktls": false, 00:23:18.552 "enable_placement_id": 0, 00:23:18.552 "enable_quickack": false, 00:23:18.552 "enable_recv_pipe": true, 00:23:18.552 "enable_zerocopy_send_client": false, 00:23:18.552 "enable_zerocopy_send_server": true, 00:23:18.552 "impl_name": "ssl", 00:23:18.552 "recv_buf_size": 4096, 00:23:18.552 "send_buf_size": 4096, 00:23:18.552 "tls_version": 0, 00:23:18.552 "zerocopy_threshold": 0 00:23:18.552 } 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "vmd", 00:23:18.552 "config": [] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "accel", 00:23:18.552 "config": [ 00:23:18.552 { 00:23:18.552 "method": "accel_set_options", 00:23:18.552 "params": { 00:23:18.552 "buf_count": 2048, 00:23:18.552 "large_cache_size": 16, 00:23:18.552 "sequence_count": 2048, 00:23:18.552 "small_cache_size": 128, 00:23:18.552 "task_count": 2048 00:23:18.552 } 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "bdev", 00:23:18.552 "config": [ 00:23:18.552 { 00:23:18.552 "method": "bdev_set_options", 00:23:18.552 "params": { 00:23:18.552 "bdev_auto_examine": true, 00:23:18.552 "bdev_io_cache_size": 256, 00:23:18.552 "bdev_io_pool_size": 65535, 00:23:18.552 "iobuf_large_cache_size": 16, 00:23:18.552 "iobuf_small_cache_size": 128 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_raid_set_options", 00:23:18.552 "params": { 00:23:18.552 "process_window_size_kb": 1024 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_iscsi_set_options", 00:23:18.552 "params": { 00:23:18.552 "timeout_sec": 30 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_nvme_set_options", 00:23:18.552 "params": 
{ 00:23:18.552 "action_on_timeout": "none", 00:23:18.552 "allow_accel_sequence": false, 00:23:18.552 "arbitration_burst": 0, 00:23:18.552 "bdev_retry_count": 3, 00:23:18.552 "ctrlr_loss_timeout_sec": 0, 00:23:18.552 "delay_cmd_submit": true, 00:23:18.552 "dhchap_dhgroups": [ 00:23:18.552 "null", 00:23:18.552 "ffdhe2048", 00:23:18.552 "ffdhe3072", 00:23:18.552 "ffdhe4096", 00:23:18.552 "ffdhe6144", 00:23:18.552 "ffdhe8192" 00:23:18.552 ], 00:23:18.552 "dhchap_digests": [ 00:23:18.552 "sha256", 00:23:18.552 "sha384", 00:23:18.552 "sha512" 00:23:18.552 ], 00:23:18.552 "disable_auto_failback": false, 00:23:18.552 "fast_io_fail_timeout_sec": 0, 00:23:18.552 "generate_uuids": false, 00:23:18.552 "high_priority_weight": 0, 00:23:18.552 "io_path_stat": false, 00:23:18.552 "io_queue_requests": 512, 00:23:18.552 "keep_alive_timeout_ms": 10000, 00:23:18.552 "low_priority_weight": 0, 00:23:18.552 "medium_priority_weight": 0, 00:23:18.552 "nvme_adminq_poll_period_us": 10000, 00:23:18.552 "nvme_error_stat": false, 00:23:18.552 "nvme_ioq_poll_period_us": 0, 00:23:18.552 "rdma_cm_event_timeout_ms": 0, 00:23:18.552 "rdma_max_cq_size": 0, 00:23:18.552 "rdma_srq_size": 0, 00:23:18.552 "reconnect_delay_sec": 0, 00:23:18.552 "timeout_admin_us": 0, 00:23:18.552 "timeout_us": 0, 00:23:18.552 "transport_ack_timeout": 0, 00:23:18.552 "transport_retry_count": 4, 00:23:18.552 "transport_tos": 0 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_nvme_attach_controller", 00:23:18.552 "params": { 00:23:18.552 "adrfam": "IPv4", 00:23:18.552 "ctrlr_loss_timeout_sec": 0, 00:23:18.552 "ddgst": false, 00:23:18.552 "fast_io_fail_timeout_sec": 0, 00:23:18.552 "hdgst": false, 00:23:18.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.552 "name": "nvme0", 00:23:18.552 "prchk_guard": false, 00:23:18.552 "prchk_reftag": false, 00:23:18.552 "psk": "key0", 00:23:18.552 "reconnect_delay_sec": 0, 00:23:18.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.552 "traddr": "10.0.0.2", 00:23:18.552 "trsvcid": "4420", 00:23:18.552 "trtype": "TCP" 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_nvme_set_hotplug", 00:23:18.552 "params": { 00:23:18.552 "enable": false, 00:23:18.552 "period_us": 100000 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_enable_histogram", 00:23:18.552 "params": { 00:23:18.552 "enable": true, 00:23:18.552 "name": "nvme0n1" 00:23:18.552 } 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "method": "bdev_wait_for_examine" 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }, 00:23:18.552 { 00:23:18.552 "subsystem": "nbd", 00:23:18.552 "config": [] 00:23:18.552 } 00:23:18.552 ] 00:23:18.552 }' 00:23:18.552 [2024-04-18 15:10:34.066343] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:23:18.552 [2024-04-18 15:10:34.066939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78668 ] 00:23:18.552 [2024-04-18 15:10:34.199884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.812 [2024-04-18 15:10:34.292876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.812 [2024-04-18 15:10:34.449393] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.380 15:10:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:19.380 15:10:34 -- common/autotest_common.sh@850 -- # return 0 00:23:19.380 15:10:34 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.380 15:10:34 -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:19.638 15:10:35 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.638 15:10:35 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.638 Running I/O for 1 seconds... 00:23:20.575 00:23:20.575 Latency(us) 00:23:20.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.575 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:20.575 Verification LBA range: start 0x0 length 0x2000 00:23:20.575 nvme0n1 : 1.01 5633.14 22.00 0.00 0.00 22545.47 4579.62 18107.94 00:23:20.575 =================================================================================================================== 00:23:20.575 Total : 5633.14 22.00 0.00 0.00 22545.47 4579.62 18107.94 00:23:20.575 0 00:23:20.575 15:10:36 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:20.575 15:10:36 -- target/tls.sh@279 -- # cleanup 00:23:20.575 15:10:36 -- target/tls.sh@15 -- # process_shm --id 0 00:23:20.575 15:10:36 -- common/autotest_common.sh@794 -- # type=--id 00:23:20.575 15:10:36 -- common/autotest_common.sh@795 -- # id=0 00:23:20.575 15:10:36 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:20.575 15:10:36 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:20.575 15:10:36 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:20.575 15:10:36 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:20.575 15:10:36 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:20.576 15:10:36 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:20.835 nvmf_trace.0 00:23:20.835 15:10:36 -- common/autotest_common.sh@809 -- # return 0 00:23:20.835 15:10:36 -- target/tls.sh@16 -- # killprocess 78668 00:23:20.835 15:10:36 -- common/autotest_common.sh@936 -- # '[' -z 78668 ']' 00:23:20.835 15:10:36 -- common/autotest_common.sh@940 -- # kill -0 78668 00:23:20.835 15:10:36 -- common/autotest_common.sh@941 -- # uname 00:23:20.835 15:10:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.835 15:10:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78668 00:23:20.835 15:10:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:20.835 15:10:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:20.835 killing process with pid 78668 00:23:20.835 15:10:36 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 78668' 00:23:20.835 Received shutdown signal, test time was about 1.000000 seconds 00:23:20.835 00:23:20.835 Latency(us) 00:23:20.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.835 =================================================================================================================== 00:23:20.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.835 15:10:36 -- common/autotest_common.sh@955 -- # kill 78668 00:23:20.835 15:10:36 -- common/autotest_common.sh@960 -- # wait 78668 00:23:21.094 15:10:36 -- target/tls.sh@17 -- # nvmftestfini 00:23:21.094 15:10:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:21.094 15:10:36 -- nvmf/common.sh@117 -- # sync 00:23:21.094 15:10:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.094 15:10:36 -- nvmf/common.sh@120 -- # set +e 00:23:21.094 15:10:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.094 15:10:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.094 rmmod nvme_tcp 00:23:21.094 rmmod nvme_fabrics 00:23:21.094 rmmod nvme_keyring 00:23:21.094 15:10:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.094 15:10:36 -- nvmf/common.sh@124 -- # set -e 00:23:21.094 15:10:36 -- nvmf/common.sh@125 -- # return 0 00:23:21.094 15:10:36 -- nvmf/common.sh@478 -- # '[' -n 78624 ']' 00:23:21.094 15:10:36 -- nvmf/common.sh@479 -- # killprocess 78624 00:23:21.094 15:10:36 -- common/autotest_common.sh@936 -- # '[' -z 78624 ']' 00:23:21.094 15:10:36 -- common/autotest_common.sh@940 -- # kill -0 78624 00:23:21.094 15:10:36 -- common/autotest_common.sh@941 -- # uname 00:23:21.094 15:10:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:21.094 15:10:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78624 00:23:21.094 15:10:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:21.094 15:10:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:21.094 killing process with pid 78624 00:23:21.094 15:10:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78624' 00:23:21.094 15:10:36 -- common/autotest_common.sh@955 -- # kill 78624 00:23:21.094 15:10:36 -- common/autotest_common.sh@960 -- # wait 78624 00:23:21.354 15:10:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:21.354 15:10:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:21.354 15:10:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:21.354 15:10:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:21.354 15:10:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:21.354 15:10:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.354 15:10:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.354 15:10:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.354 15:10:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:21.354 15:10:37 -- target/tls.sh@18 -- # rm -f /tmp/tmp.MiXdgh5kIJ /tmp/tmp.dTvtOfWHm7 /tmp/tmp.qFD9mpxRsg 00:23:21.354 00:23:21.354 real 1m23.281s 00:23:21.354 user 2m5.302s 00:23:21.354 sys 0m30.878s 00:23:21.354 15:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:21.354 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:21.354 ************************************ 00:23:21.354 END TEST nvmf_tls 00:23:21.354 ************************************ 00:23:21.613 15:10:37 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.613 15:10:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:21.613 15:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.613 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:21.613 ************************************ 00:23:21.613 START TEST nvmf_fips 00:23:21.613 ************************************ 00:23:21.613 15:10:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.613 * Looking for test storage... 00:23:21.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:21.873 15:10:37 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.873 15:10:37 -- nvmf/common.sh@7 -- # uname -s 00:23:21.873 15:10:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.873 15:10:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.873 15:10:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.873 15:10:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.873 15:10:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.873 15:10:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.873 15:10:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.873 15:10:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.873 15:10:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.873 15:10:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.873 15:10:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:21.873 15:10:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:21.873 15:10:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.873 15:10:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.873 15:10:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.873 15:10:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.873 15:10:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.873 15:10:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.873 15:10:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.873 15:10:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.873 15:10:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.873 15:10:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.873 15:10:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.873 15:10:37 -- paths/export.sh@5 -- # export PATH 00:23:21.873 15:10:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.873 15:10:37 -- nvmf/common.sh@47 -- # : 0 00:23:21.873 15:10:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.873 15:10:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.873 15:10:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.873 15:10:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.873 15:10:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.873 15:10:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.873 15:10:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.873 15:10:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.873 15:10:37 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.873 15:10:37 -- fips/fips.sh@89 -- # check_openssl_version 00:23:21.873 15:10:37 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:21.873 15:10:37 -- fips/fips.sh@85 -- # openssl version 00:23:21.873 15:10:37 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:21.873 15:10:37 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:21.873 15:10:37 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:21.873 15:10:37 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:21.873 15:10:37 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:21.873 15:10:37 -- scripts/common.sh@333 -- # IFS=.-: 00:23:21.873 15:10:37 -- scripts/common.sh@333 -- # read -ra ver1 00:23:21.873 15:10:37 -- scripts/common.sh@334 -- # IFS=.-: 00:23:21.873 15:10:37 -- scripts/common.sh@334 -- # read -ra ver2 00:23:21.873 15:10:37 -- scripts/common.sh@335 -- # local 'op=>=' 00:23:21.873 15:10:37 -- scripts/common.sh@337 -- # ver1_l=3 00:23:21.873 15:10:37 -- scripts/common.sh@338 -- # ver2_l=3 00:23:21.873 15:10:37 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:21.873 15:10:37 -- 
scripts/common.sh@341 -- # case "$op" in 00:23:21.873 15:10:37 -- scripts/common.sh@345 -- # : 1 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.873 15:10:37 -- scripts/common.sh@362 -- # decimal 3 00:23:21.873 15:10:37 -- scripts/common.sh@350 -- # local d=3 00:23:21.873 15:10:37 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.873 15:10:37 -- scripts/common.sh@352 -- # echo 3 00:23:21.873 15:10:37 -- scripts/common.sh@362 -- # ver1[v]=3 00:23:21.873 15:10:37 -- scripts/common.sh@363 -- # decimal 3 00:23:21.873 15:10:37 -- scripts/common.sh@350 -- # local d=3 00:23:21.873 15:10:37 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.873 15:10:37 -- scripts/common.sh@352 -- # echo 3 00:23:21.873 15:10:37 -- scripts/common.sh@363 -- # ver2[v]=3 00:23:21.873 15:10:37 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.873 15:10:37 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.873 15:10:37 -- scripts/common.sh@362 -- # decimal 0 00:23:21.873 15:10:37 -- scripts/common.sh@350 -- # local d=0 00:23:21.873 15:10:37 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.873 15:10:37 -- scripts/common.sh@352 -- # echo 0 00:23:21.873 15:10:37 -- scripts/common.sh@362 -- # ver1[v]=0 00:23:21.873 15:10:37 -- scripts/common.sh@363 -- # decimal 0 00:23:21.873 15:10:37 -- scripts/common.sh@350 -- # local d=0 00:23:21.873 15:10:37 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.873 15:10:37 -- scripts/common.sh@352 -- # echo 0 00:23:21.873 15:10:37 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.873 15:10:37 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.873 15:10:37 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.873 15:10:37 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.873 15:10:37 -- scripts/common.sh@362 -- # decimal 9 00:23:21.873 15:10:37 -- scripts/common.sh@350 -- # local d=9 00:23:21.873 15:10:37 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:21.873 15:10:37 -- scripts/common.sh@352 -- # echo 9 00:23:21.874 15:10:37 -- scripts/common.sh@362 -- # ver1[v]=9 00:23:21.874 15:10:37 -- scripts/common.sh@363 -- # decimal 0 00:23:21.874 15:10:37 -- scripts/common.sh@350 -- # local d=0 00:23:21.874 15:10:37 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.874 15:10:37 -- scripts/common.sh@352 -- # echo 0 00:23:21.874 15:10:37 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.874 15:10:37 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.874 15:10:37 -- scripts/common.sh@364 -- # return 0 00:23:21.874 15:10:37 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:21.874 15:10:37 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:21.874 15:10:37 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:21.874 15:10:37 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:21.874 15:10:37 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:21.874 15:10:37 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:21.874 15:10:37 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:21.874 15:10:37 -- fips/fips.sh@113 -- # build_openssl_config 00:23:21.874 15:10:37 -- fips/fips.sh@37 -- # cat 00:23:21.874 15:10:37 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:21.874 15:10:37 -- fips/fips.sh@58 -- # cat - 00:23:21.874 15:10:37 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:21.874 15:10:37 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:21.874 15:10:37 -- fips/fips.sh@116 -- # mapfile -t providers 00:23:21.874 15:10:37 -- fips/fips.sh@116 -- # openssl list -providers 00:23:21.874 15:10:37 -- fips/fips.sh@116 -- # grep name 00:23:21.874 15:10:37 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:21.874 15:10:37 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:21.874 15:10:37 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:21.874 15:10:37 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:21.874 15:10:37 -- common/autotest_common.sh@638 -- # local es=0 00:23:21.874 15:10:37 -- fips/fips.sh@127 -- # : 00:23:21.874 15:10:37 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:21.874 15:10:37 -- common/autotest_common.sh@626 -- # local arg=openssl 00:23:21.874 15:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.874 15:10:37 -- common/autotest_common.sh@630 -- # type -t openssl 00:23:21.874 15:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.874 15:10:37 -- common/autotest_common.sh@632 -- # type -P openssl 00:23:21.874 15:10:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:21.874 15:10:37 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:23:21.874 15:10:37 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:23:21.874 15:10:37 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:23:22.172 Error setting digest 00:23:22.172 00C21B6A7B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:22.172 00C21B6A7B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:22.172 15:10:37 -- common/autotest_common.sh@641 -- # es=1 00:23:22.172 15:10:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:22.172 15:10:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:22.172 15:10:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:22.172 15:10:37 -- fips/fips.sh@130 -- # nvmftestinit 00:23:22.172 15:10:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:22.172 15:10:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.172 15:10:37 -- nvmf/common.sh@437 -- # prepare_net_devs 
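The "Error setting digest" output above is the expected result of the FIPS sanity check: with the fips provider active, a legacy digest such as MD5 must fail to initialize. A small sketch of that check, assuming OPENSSL_CONF points at a config that activates the fips provider (spdk_fips.conf here, as in the log):

  #!/usr/bin/env bash
  # Sketch: confirm FIPS enforcement by expecting MD5 to be rejected.
  if printf 'test' | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
      echo "MD5 unexpectedly succeeded - FIPS enforcement is NOT active" >&2
      exit 1
  fi
  echo "MD5 rejected as expected - FIPS enforcement active"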
00:23:22.172 15:10:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:22.172 15:10:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:22.172 15:10:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.172 15:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.172 15:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.172 15:10:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:22.172 15:10:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:22.172 15:10:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:22.172 15:10:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:22.172 15:10:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:22.172 15:10:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:22.172 15:10:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.172 15:10:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.172 15:10:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:22.172 15:10:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:22.172 15:10:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.172 15:10:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.172 15:10:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.172 15:10:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.172 15:10:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.172 15:10:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.172 15:10:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.172 15:10:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.172 15:10:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:22.172 15:10:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:22.172 Cannot find device "nvmf_tgt_br" 00:23:22.172 15:10:37 -- nvmf/common.sh@155 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.172 Cannot find device "nvmf_tgt_br2" 00:23:22.172 15:10:37 -- nvmf/common.sh@156 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:22.172 15:10:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:22.172 Cannot find device "nvmf_tgt_br" 00:23:22.172 15:10:37 -- nvmf/common.sh@158 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:22.172 Cannot find device "nvmf_tgt_br2" 00:23:22.172 15:10:37 -- nvmf/common.sh@159 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:22.172 15:10:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:22.172 15:10:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.172 15:10:37 -- nvmf/common.sh@162 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.172 15:10:37 -- nvmf/common.sh@163 -- # true 00:23:22.172 15:10:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.172 15:10:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.172 15:10:37 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.172 15:10:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.172 15:10:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.172 15:10:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.172 15:10:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.172 15:10:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:22.172 15:10:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:22.172 15:10:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:22.172 15:10:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:22.172 15:10:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:22.172 15:10:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:22.172 15:10:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.172 15:10:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.172 15:10:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.172 15:10:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:22.172 15:10:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:22.172 15:10:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.431 15:10:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.431 15:10:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.431 15:10:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.431 15:10:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.431 15:10:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:22.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:23:22.431 00:23:22.431 --- 10.0.0.2 ping statistics --- 00:23:22.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.431 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:22.431 15:10:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:22.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:22.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:22.431 00:23:22.431 --- 10.0.0.3 ping statistics --- 00:23:22.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.431 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:22.431 15:10:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:22.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:23:22.432 00:23:22.432 --- 10.0.0.1 ping statistics --- 00:23:22.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.432 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:22.432 15:10:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.432 15:10:37 -- nvmf/common.sh@422 -- # return 0 00:23:22.432 15:10:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:22.432 15:10:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.432 15:10:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:22.432 15:10:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:22.432 15:10:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.432 15:10:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:22.432 15:10:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:22.432 15:10:37 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:22.432 15:10:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:22.432 15:10:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:22.432 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.432 15:10:37 -- nvmf/common.sh@470 -- # nvmfpid=78958 00:23:22.432 15:10:37 -- nvmf/common.sh@471 -- # waitforlisten 78958 00:23:22.432 15:10:37 -- common/autotest_common.sh@817 -- # '[' -z 78958 ']' 00:23:22.432 15:10:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.432 15:10:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:22.432 15:10:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.432 15:10:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:22.432 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:23:22.432 15:10:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.432 [2024-04-18 15:10:38.058732] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:22.432 [2024-04-18 15:10:38.058830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.691 [2024-04-18 15:10:38.202479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.691 [2024-04-18 15:10:38.289773] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.691 [2024-04-18 15:10:38.289837] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.691 [2024-04-18 15:10:38.289847] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.691 [2024-04-18 15:10:38.289856] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.691 [2024-04-18 15:10:38.289863] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
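The nvmf_veth_init sequence above builds the test topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the veth peers. A condensed sketch of those commands (the second target interface, 10.0.0.3, and the iptables rules are omitted for brevity):

  #!/usr/bin/env bash
  # Sketch: veth/namespace topology equivalent to nvmf_veth_init in the log.
  set -e
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2   # reachability check, as in the log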
00:23:22.691 [2024-04-18 15:10:38.289902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.258 15:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:23.258 15:10:38 -- common/autotest_common.sh@850 -- # return 0 00:23:23.258 15:10:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:23.258 15:10:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:23.258 15:10:38 -- common/autotest_common.sh@10 -- # set +x 00:23:23.258 15:10:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.258 15:10:38 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:23.258 15:10:38 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:23.258 15:10:38 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:23.258 15:10:38 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:23.258 15:10:38 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:23.258 15:10:38 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:23.258 15:10:38 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:23.258 15:10:38 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.517 [2024-04-18 15:10:39.140675] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.517 [2024-04-18 15:10:39.156585] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.517 [2024-04-18 15:10:39.156815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.517 [2024-04-18 15:10:39.189062] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:23.517 malloc0 00:23:23.776 15:10:39 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.776 15:10:39 -- fips/fips.sh@147 -- # bdevperf_pid=79010 00:23:23.776 15:10:39 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:23.776 15:10:39 -- fips/fips.sh@148 -- # waitforlisten 79010 /var/tmp/bdevperf.sock 00:23:23.776 15:10:39 -- common/autotest_common.sh@817 -- # '[' -z 79010 ']' 00:23:23.776 15:10:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.776 15:10:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:23.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:23.776 15:10:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.776 15:10:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:23.776 15:10:39 -- common/autotest_common.sh@10 -- # set +x 00:23:23.776 [2024-04-18 15:10:39.293303] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
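The FIPS test then generates an NVMe TLS PSK ("NVMeTLSkey-1:01:..."), writes it to key.txt with mode 0600, and configures the target through rpc.py before starting bdevperf. A rough sketch of that target-side setup; the option spellings can differ between SPDK revisions (notably --secure-channel and the deprecated PSK-path form of --psk flagged in the log), so treat the flags as illustrative:

  #!/usr/bin/env bash
  # Sketch: target-side TLS/PSK configuration for the FIPS test.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_malloc_create -b malloc0 32 4096
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME -m 10
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"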
00:23:23.776 [2024-04-18 15:10:39.293404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79010 ] 00:23:23.776 [2024-04-18 15:10:39.422493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.035 [2024-04-18 15:10:39.515584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.604 15:10:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:24.604 15:10:40 -- common/autotest_common.sh@850 -- # return 0 00:23:24.604 15:10:40 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:24.864 [2024-04-18 15:10:40.393098] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.864 [2024-04-18 15:10:40.393217] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:24.864 TLSTESTn1 00:23:24.864 15:10:40 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.122 Running I/O for 10 seconds... 00:23:35.093 00:23:35.093 Latency(us) 00:23:35.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:35.093 Verification LBA range: start 0x0 length 0x2000 00:23:35.093 TLSTESTn1 : 10.01 5320.88 20.78 0.00 0.00 24019.66 4658.58 16634.04 00:23:35.093 =================================================================================================================== 00:23:35.093 Total : 5320.88 20.78 0.00 0.00 24019.66 4658.58 16634.04 00:23:35.093 0 00:23:35.093 15:10:50 -- fips/fips.sh@1 -- # cleanup 00:23:35.093 15:10:50 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:35.093 15:10:50 -- common/autotest_common.sh@794 -- # type=--id 00:23:35.093 15:10:50 -- common/autotest_common.sh@795 -- # id=0 00:23:35.093 15:10:50 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:35.093 15:10:50 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:35.093 15:10:50 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:35.093 15:10:50 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:35.093 15:10:50 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:35.093 15:10:50 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:35.093 nvmf_trace.0 00:23:35.093 15:10:50 -- common/autotest_common.sh@809 -- # return 0 00:23:35.093 15:10:50 -- fips/fips.sh@16 -- # killprocess 79010 00:23:35.093 15:10:50 -- common/autotest_common.sh@936 -- # '[' -z 79010 ']' 00:23:35.093 15:10:50 -- common/autotest_common.sh@940 -- # kill -0 79010 00:23:35.093 15:10:50 -- common/autotest_common.sh@941 -- # uname 00:23:35.093 15:10:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.093 15:10:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79010 00:23:35.093 killing process with pid 79010 00:23:35.093 Received shutdown signal, test time was 
about 10.000000 seconds 00:23:35.093 00:23:35.093 Latency(us) 00:23:35.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.093 =================================================================================================================== 00:23:35.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.093 15:10:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:35.093 15:10:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:35.093 15:10:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79010' 00:23:35.093 15:10:50 -- common/autotest_common.sh@955 -- # kill 79010 00:23:35.093 [2024-04-18 15:10:50.717791] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.093 15:10:50 -- common/autotest_common.sh@960 -- # wait 79010 00:23:35.351 15:10:50 -- fips/fips.sh@17 -- # nvmftestfini 00:23:35.351 15:10:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:35.351 15:10:50 -- nvmf/common.sh@117 -- # sync 00:23:35.351 15:10:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.351 15:10:50 -- nvmf/common.sh@120 -- # set +e 00:23:35.351 15:10:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.351 15:10:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.351 rmmod nvme_tcp 00:23:35.351 rmmod nvme_fabrics 00:23:35.351 rmmod nvme_keyring 00:23:35.351 15:10:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.609 15:10:51 -- nvmf/common.sh@124 -- # set -e 00:23:35.609 15:10:51 -- nvmf/common.sh@125 -- # return 0 00:23:35.609 15:10:51 -- nvmf/common.sh@478 -- # '[' -n 78958 ']' 00:23:35.609 15:10:51 -- nvmf/common.sh@479 -- # killprocess 78958 00:23:35.609 15:10:51 -- common/autotest_common.sh@936 -- # '[' -z 78958 ']' 00:23:35.609 15:10:51 -- common/autotest_common.sh@940 -- # kill -0 78958 00:23:35.609 15:10:51 -- common/autotest_common.sh@941 -- # uname 00:23:35.609 15:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.609 15:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78958 00:23:35.609 15:10:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:35.609 15:10:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:35.609 15:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78958' 00:23:35.609 killing process with pid 78958 00:23:35.609 15:10:51 -- common/autotest_common.sh@955 -- # kill 78958 00:23:35.609 [2024-04-18 15:10:51.104373] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:35.609 15:10:51 -- common/autotest_common.sh@960 -- # wait 78958 00:23:35.867 15:10:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:35.867 15:10:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:35.867 15:10:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:35.867 15:10:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:35.867 15:10:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:35.867 15:10:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.867 15:10:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.867 15:10:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.867 15:10:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:35.867 15:10:51 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:35.867 ************************************ 00:23:35.867 END TEST nvmf_fips 00:23:35.867 ************************************ 00:23:35.867 00:23:35.867 real 0m14.201s 00:23:35.867 user 0m18.363s 00:23:35.867 sys 0m6.172s 00:23:35.867 15:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:35.867 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:23:35.867 15:10:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:35.867 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:23:35.867 15:10:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:35.867 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:23:35.867 15:10:51 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:35.867 15:10:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:35.867 15:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:35.867 15:10:51 -- common/autotest_common.sh@10 -- # set +x 00:23:36.125 ************************************ 00:23:36.125 START TEST nvmf_multicontroller 00:23:36.125 ************************************ 00:23:36.125 15:10:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:36.125 * Looking for test storage... 00:23:36.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:36.125 15:10:51 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:36.125 15:10:51 -- nvmf/common.sh@7 -- # uname -s 00:23:36.125 15:10:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.125 15:10:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.125 15:10:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.125 15:10:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.125 15:10:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.125 15:10:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.125 15:10:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.125 15:10:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.125 15:10:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.125 15:10:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.125 15:10:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:36.125 15:10:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:36.125 15:10:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.125 15:10:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.125 15:10:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:36.125 15:10:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.125 15:10:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:36.125 15:10:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.125 15:10:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.125 15:10:51 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.125 15:10:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.125 15:10:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.125 15:10:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.125 15:10:51 -- paths/export.sh@5 -- # export PATH 00:23:36.126 15:10:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.126 15:10:51 -- nvmf/common.sh@47 -- # : 0 00:23:36.126 15:10:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.126 15:10:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.126 15:10:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.126 15:10:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.126 15:10:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.126 15:10:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.126 15:10:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.126 15:10:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.126 15:10:51 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:36.126 15:10:51 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:36.126 15:10:51 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:36.126 15:10:51 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:36.126 15:10:51 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:36.126 15:10:51 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
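The host/multicontroller.sh header traced above pins down the whole fixture: two 64 MiB malloc bdevs with a 512-byte block size exported over TCP, two reserved host-side service IDs so the attach calls can fix the initiator-side port, and a dedicated RPC socket for the bdevperf application under test. A standalone sketch of the same knobs, assuming the values shown in the trace:

    # multicontroller.sh test parameters as traced above (sketch, not the full script)
    MALLOC_BDEV_SIZE=64                        # MiB per malloc bdev backing each namespace
    MALLOC_BLOCK_SIZE=512                      # logical block size of those bdevs
    NVMF_HOST_FIRST_PORT=60000                 # host-side service id passed as -c on attach
    NVMF_HOST_SECOND_PORT=60001                # second reserved host-side service id
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # RPC socket of the bdevperf app under test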
00:23:36.126 15:10:51 -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:36.126 15:10:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:36.126 15:10:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.126 15:10:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:36.126 15:10:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:36.126 15:10:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:36.126 15:10:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.126 15:10:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.126 15:10:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.126 15:10:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:36.126 15:10:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:36.126 15:10:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:36.126 15:10:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:36.126 15:10:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:36.126 15:10:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:36.126 15:10:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.126 15:10:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.126 15:10:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:36.126 15:10:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:36.126 15:10:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:36.126 15:10:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:36.126 15:10:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:36.126 15:10:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.126 15:10:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:36.126 15:10:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:36.126 15:10:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:36.126 15:10:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:36.126 15:10:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:36.126 15:10:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:36.126 Cannot find device "nvmf_tgt_br" 00:23:36.126 15:10:51 -- nvmf/common.sh@155 -- # true 00:23:36.126 15:10:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.126 Cannot find device "nvmf_tgt_br2" 00:23:36.126 15:10:51 -- nvmf/common.sh@156 -- # true 00:23:36.126 15:10:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:36.126 15:10:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:36.126 Cannot find device "nvmf_tgt_br" 00:23:36.126 15:10:51 -- nvmf/common.sh@158 -- # true 00:23:36.126 15:10:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:36.126 Cannot find device "nvmf_tgt_br2" 00:23:36.126 15:10:51 -- nvmf/common.sh@159 -- # true 00:23:36.126 15:10:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:36.383 15:10:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:36.383 15:10:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:36.383 15:10:51 -- nvmf/common.sh@162 -- # true 00:23:36.383 15:10:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.383 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:23:36.383 15:10:51 -- nvmf/common.sh@163 -- # true 00:23:36.383 15:10:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:36.383 15:10:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:36.383 15:10:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:36.383 15:10:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:36.383 15:10:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:36.383 15:10:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:36.383 15:10:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:36.383 15:10:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:36.383 15:10:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:36.383 15:10:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:36.383 15:10:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:36.383 15:10:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:36.383 15:10:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:36.383 15:10:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:36.383 15:10:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:36.383 15:10:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:36.383 15:10:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:36.383 15:10:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:36.383 15:10:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:36.383 15:10:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:36.383 15:10:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:36.641 15:10:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:36.641 15:10:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:36.641 15:10:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:36.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:36.641 00:23:36.641 --- 10.0.0.2 ping statistics --- 00:23:36.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.641 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:36.641 15:10:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:36.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:36.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:36.641 00:23:36.641 --- 10.0.0.3 ping statistics --- 00:23:36.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.641 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:36.641 15:10:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:36.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:23:36.641 00:23:36.641 --- 10.0.0.1 ping statistics --- 00:23:36.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.641 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:23:36.641 15:10:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.641 15:10:52 -- nvmf/common.sh@422 -- # return 0 00:23:36.641 15:10:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:36.641 15:10:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.641 15:10:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:36.641 15:10:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:36.641 15:10:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.641 15:10:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:36.641 15:10:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:36.641 15:10:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:36.641 15:10:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:36.641 15:10:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:36.641 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:23:36.641 15:10:52 -- nvmf/common.sh@470 -- # nvmfpid=79388 00:23:36.641 15:10:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:36.641 15:10:52 -- nvmf/common.sh@471 -- # waitforlisten 79388 00:23:36.641 15:10:52 -- common/autotest_common.sh@817 -- # '[' -z 79388 ']' 00:23:36.641 15:10:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.641 15:10:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:36.641 15:10:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.641 15:10:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:36.641 15:10:52 -- common/autotest_common.sh@10 -- # set +x 00:23:36.641 [2024-04-18 15:10:52.249901] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:36.641 [2024-04-18 15:10:52.250012] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.899 [2024-04-18 15:10:52.401025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:36.899 [2024-04-18 15:10:52.499150] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.899 [2024-04-18 15:10:52.499218] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.899 [2024-04-18 15:10:52.499228] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.899 [2024-04-18 15:10:52.499237] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.899 [2024-04-18 15:10:52.499245] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
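The ip(8) calls traced above are the harness's virtual test topology: the initiator stays in the root namespace behind nvmf_init_if (10.0.0.1), the target listens on 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and the three *_br veth peers are joined by the nvmf_br bridge. A minimal sketch of that setup, copied from the commands in the trace (the initial cleanup probes and their "Cannot find device" noise are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # tie the veth peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target checks
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator check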
00:23:36.899 [2024-04-18 15:10:52.499408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.899 [2024-04-18 15:10:52.500314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.899 [2024-04-18 15:10:52.500316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.464 15:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:37.464 15:10:53 -- common/autotest_common.sh@850 -- # return 0 00:23:37.464 15:10:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:37.464 15:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:37.464 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 15:10:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.721 15:10:53 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 [2024-04-18 15:10:53.187485] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.721 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.721 15:10:53 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 Malloc0 00:23:37.721 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.721 15:10:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.721 15:10:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.721 15:10:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.721 [2024-04-18 15:10:53.252751] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.721 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.721 15:10:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.721 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.721 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 [2024-04-18 15:10:53.264743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:37.722 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:37.722 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 Malloc1 00:23:37.722 15:10:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:37.722 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:37.722 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:37.722 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:37.722 15:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:37.722 15:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.722 15:10:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=79440 00:23:37.722 15:10:53 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:37.722 15:10:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:37.722 15:10:53 -- host/multicontroller.sh@47 -- # waitforlisten 79440 /var/tmp/bdevperf.sock 00:23:37.722 15:10:53 -- common/autotest_common.sh@817 -- # '[' -z 79440 ']' 00:23:37.722 15:10:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.722 15:10:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:37.722 15:10:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
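At this point the namespaced nvmf_tgt has one TCP transport, two subsystems (cnode1 and cnode2, each backed by a 64 MiB malloc bdev) listening on 10.0.0.2 ports 4420 and 4421, and an idle bdevperf process waiting on its own RPC socket. A sketch of the same configuration as direct rpc.py calls, assuming rpc_cmd in the trace simply forwards its arguments to scripts/rpc.py:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        $RPC bdev_malloc_create 64 512 -b Malloc$((i - 1))
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done
    # bdevperf is started idle (-z) on a private socket so that controllers can be
    # attached to it over RPC afterwards, as host/multicontroller.sh@43 does above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &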
00:23:37.722 15:10:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:37.722 15:10:53 -- common/autotest_common.sh@10 -- # set +x 00:23:38.709 15:10:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:38.709 15:10:54 -- common/autotest_common.sh@850 -- # return 0 00:23:38.710 15:10:54 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:38.710 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.710 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.710 NVMe0n1 00:23:38.710 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.710 15:10:54 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:38.710 15:10:54 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.710 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.710 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.710 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.710 1 00:23:38.710 15:10:54 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:38.710 15:10:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:38.710 15:10:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:38.710 15:10:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:38.710 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.710 15:10:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:38.710 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.710 15:10:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:38.710 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.710 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.710 2024/04/18 15:10:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:38.710 request: 00:23:38.710 { 00:23:38.710 "method": "bdev_nvme_attach_controller", 00:23:38.710 "params": { 00:23:38.710 "name": "NVMe0", 00:23:38.710 "trtype": "tcp", 00:23:38.710 "traddr": "10.0.0.2", 00:23:38.710 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:38.710 "hostaddr": "10.0.0.2", 00:23:38.710 "hostsvcid": "60000", 00:23:38.710 "adrfam": "ipv4", 00:23:38.710 "trsvcid": "4420", 00:23:38.710 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:38.710 } 00:23:38.710 } 00:23:38.710 Got JSON-RPC error response 00:23:38.710 GoRPCClient: error on JSON-RPC call 00:23:38.710 15:10:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:38.972 15:10:54 -- 
common/autotest_common.sh@641 -- # es=1 00:23:38.972 15:10:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:38.972 15:10:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:38.972 15:10:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:38.972 15:10:54 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:38.972 15:10:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:38.972 15:10:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:38.972 15:10:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.972 15:10:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:38.972 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.972 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.972 2024/04/18 15:10:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:38.972 request: 00:23:38.972 { 00:23:38.972 "method": "bdev_nvme_attach_controller", 00:23:38.972 "params": { 00:23:38.972 "name": "NVMe0", 00:23:38.972 "trtype": "tcp", 00:23:38.972 "traddr": "10.0.0.2", 00:23:38.972 "hostaddr": "10.0.0.2", 00:23:38.972 "hostsvcid": "60000", 00:23:38.972 "adrfam": "ipv4", 00:23:38.972 "trsvcid": "4420", 00:23:38.972 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:38.972 } 00:23:38.972 } 00:23:38.972 Got JSON-RPC error response 00:23:38.972 GoRPCClient: error on JSON-RPC call 00:23:38.972 15:10:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:38.972 15:10:54 -- common/autotest_common.sh@641 -- # es=1 00:23:38.972 15:10:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:38.972 15:10:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:38.972 15:10:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:38.972 15:10:54 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:38.972 15:10:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:38.972 15:10:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:38.972 15:10:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:38.972 15:10:54 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.972 15:10:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:38.972 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.972 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.972 2024/04/18 15:10:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:38.972 request: 00:23:38.972 { 00:23:38.972 "method": "bdev_nvme_attach_controller", 00:23:38.972 "params": { 00:23:38.972 "name": "NVMe0", 00:23:38.972 "trtype": "tcp", 00:23:38.972 "traddr": "10.0.0.2", 00:23:38.972 "hostaddr": "10.0.0.2", 00:23:38.972 "hostsvcid": "60000", 00:23:38.972 "adrfam": "ipv4", 00:23:38.972 "trsvcid": "4420", 00:23:38.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.972 "multipath": "disable" 00:23:38.972 } 00:23:38.972 } 00:23:38.972 Got JSON-RPC error response 00:23:38.972 GoRPCClient: error on JSON-RPC call 00:23:38.972 15:10:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:38.972 15:10:54 -- common/autotest_common.sh@641 -- # es=1 00:23:38.972 15:10:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:38.972 15:10:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:38.972 15:10:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:38.972 15:10:54 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:38.972 15:10:54 -- common/autotest_common.sh@638 -- # local es=0 00:23:38.972 15:10:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:38.972 15:10:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.972 15:10:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:38.973 15:10:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:38.973 15:10:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:38.973 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.973 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 2024/04/18 15:10:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:38.973 request: 00:23:38.973 { 00:23:38.973 "method": "bdev_nvme_attach_controller", 00:23:38.973 "params": { 00:23:38.973 "name": "NVMe0", 
00:23:38.973 "trtype": "tcp", 00:23:38.973 "traddr": "10.0.0.2", 00:23:38.973 "hostaddr": "10.0.0.2", 00:23:38.973 "hostsvcid": "60000", 00:23:38.973 "adrfam": "ipv4", 00:23:38.973 "trsvcid": "4420", 00:23:38.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.973 "multipath": "failover" 00:23:38.973 } 00:23:38.973 } 00:23:38.973 Got JSON-RPC error response 00:23:38.973 GoRPCClient: error on JSON-RPC call 00:23:38.973 15:10:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:38.973 15:10:54 -- common/autotest_common.sh@641 -- # es=1 00:23:38.973 15:10:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:38.973 15:10:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:38.973 15:10:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:38.973 15:10:54 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.973 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.973 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 00:23:38.973 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.973 15:10:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.973 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.973 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.973 15:10:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:38.973 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.973 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 00:23:38.973 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.973 15:10:54 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.973 15:10:54 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:38.973 15:10:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:38.973 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 15:10:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:38.973 15:10:54 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:38.973 15:10:54 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.349 0 00:23:40.349 15:10:55 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:40.349 15:10:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.349 15:10:55 -- common/autotest_common.sh@10 -- # set +x 00:23:40.349 15:10:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.349 15:10:55 -- host/multicontroller.sh@100 -- # killprocess 79440 00:23:40.349 15:10:55 -- common/autotest_common.sh@936 -- # '[' -z 79440 ']' 00:23:40.349 15:10:55 -- common/autotest_common.sh@940 -- # kill -0 79440 00:23:40.349 15:10:55 -- common/autotest_common.sh@941 -- # uname 00:23:40.349 15:10:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.349 15:10:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79440 00:23:40.349 15:10:55 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:23:40.349 15:10:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:40.349 killing process with pid 79440 00:23:40.349 15:10:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79440' 00:23:40.349 15:10:55 -- common/autotest_common.sh@955 -- # kill 79440 00:23:40.349 15:10:55 -- common/autotest_common.sh@960 -- # wait 79440 00:23:40.608 15:10:56 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.608 15:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.608 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:23:40.608 15:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.608 15:10:56 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:40.608 15:10:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.608 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:23:40.608 15:10:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.608 15:10:56 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:40.608 15:10:56 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:40.608 15:10:56 -- common/autotest_common.sh@1598 -- # read -r file 00:23:40.608 15:10:56 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:40.608 15:10:56 -- common/autotest_common.sh@1597 -- # sort -u 00:23:40.608 15:10:56 -- common/autotest_common.sh@1599 -- # cat 00:23:40.608 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:40.608 [2024-04-18 15:10:53.392159] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:40.608 [2024-04-18 15:10:53.392255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79440 ] 00:23:40.608 [2024-04-18 15:10:53.535683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.608 [2024-04-18 15:10:53.641107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.608 [2024-04-18 15:10:54.631114] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 5ae43ebf-feaf-4be4-b7f4-68f95fb72b8e already exists 00:23:40.608 [2024-04-18 15:10:54.631193] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:5ae43ebf-feaf-4be4-b7f4-68f95fb72b8e alias for bdev NVMe1n1 00:23:40.608 [2024-04-18 15:10:54.631213] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:40.608 Running I/O for 1 seconds... 
00:23:40.608 00:23:40.608 Latency(us) 00:23:40.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.608 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:40.608 NVMe0n1 : 1.00 24778.66 96.79 0.00 0.00 5159.07 1723.94 9159.25 00:23:40.608 =================================================================================================================== 00:23:40.608 Total : 24778.66 96.79 0.00 0.00 5159.07 1723.94 9159.25 00:23:40.608 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.608 00:23:40.608 Latency(us) 00:23:40.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.608 =================================================================================================================== 00:23:40.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.608 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:40.608 15:10:56 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:40.608 15:10:56 -- common/autotest_common.sh@1598 -- # read -r file 00:23:40.608 15:10:56 -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:40.608 15:10:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:40.608 15:10:56 -- nvmf/common.sh@117 -- # sync 00:23:40.608 15:10:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.608 15:10:56 -- nvmf/common.sh@120 -- # set +e 00:23:40.608 15:10:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.608 15:10:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.608 rmmod nvme_tcp 00:23:40.608 rmmod nvme_fabrics 00:23:40.608 rmmod nvme_keyring 00:23:40.608 15:10:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.608 15:10:56 -- nvmf/common.sh@124 -- # set -e 00:23:40.608 15:10:56 -- nvmf/common.sh@125 -- # return 0 00:23:40.608 15:10:56 -- nvmf/common.sh@478 -- # '[' -n 79388 ']' 00:23:40.608 15:10:56 -- nvmf/common.sh@479 -- # killprocess 79388 00:23:40.608 15:10:56 -- common/autotest_common.sh@936 -- # '[' -z 79388 ']' 00:23:40.608 15:10:56 -- common/autotest_common.sh@940 -- # kill -0 79388 00:23:40.608 15:10:56 -- common/autotest_common.sh@941 -- # uname 00:23:40.608 15:10:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:40.608 15:10:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79388 00:23:40.608 15:10:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:40.608 15:10:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:40.608 killing process with pid 79388 00:23:40.608 15:10:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79388' 00:23:40.608 15:10:56 -- common/autotest_common.sh@955 -- # kill 79388 00:23:40.608 15:10:56 -- common/autotest_common.sh@960 -- # wait 79388 00:23:40.867 15:10:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:40.867 15:10:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:40.867 15:10:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:40.867 15:10:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.867 15:10:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.867 15:10:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.867 15:10:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.867 15:10:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.125 15:10:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:41.125 
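The pass/fail pattern of the bdev_nvme_attach_controller calls above is the point of the multicontroller test: one controller name, several attempts to reuse it, and only genuinely new paths or new names accepted. Restated as direct rpc.py calls against the bdevperf socket (a sketch under the same assumption that rpc_cmd forwards to scripts/rpc.py; the rejected variants are left as comments because their -114 error payloads are already shown above):

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'

    # First path: creates controller NVMe0 (and bdev NVMe0n1) against cnode1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Rejected with "A controller named NVMe0 already exists ...":
    #   - same name and path but a different hostnqn (-q nqn.2021-09-7.io.spdk:00001)
    #   - same name pointed at a different subsystem (cnode2)
    #   - same name and path with -x disable or -x failover
    # Accepted: a second listener port on the same name and subsystem
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                # drop the extra path again

    # A different name is a second, independent controller on the same subsystem
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers                   # the trace expects 2 controllers here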
00:23:41.125 real 0m5.036s 00:23:41.125 user 0m15.144s 00:23:41.125 sys 0m1.307s 00:23:41.125 15:10:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:41.125 ************************************ 00:23:41.125 END TEST nvmf_multicontroller 00:23:41.125 ************************************ 00:23:41.125 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:23:41.125 15:10:56 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.125 15:10:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:41.125 15:10:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:41.125 15:10:56 -- common/autotest_common.sh@10 -- # set +x 00:23:41.125 ************************************ 00:23:41.125 START TEST nvmf_aer 00:23:41.125 ************************************ 00:23:41.125 15:10:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.384 * Looking for test storage... 00:23:41.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:41.384 15:10:56 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:41.384 15:10:56 -- nvmf/common.sh@7 -- # uname -s 00:23:41.384 15:10:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.384 15:10:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.384 15:10:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.384 15:10:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.384 15:10:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.384 15:10:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.384 15:10:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.384 15:10:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.384 15:10:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.384 15:10:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:41.384 15:10:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:41.384 15:10:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.384 15:10:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.384 15:10:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:41.384 15:10:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.384 15:10:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:41.384 15:10:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.384 15:10:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.384 15:10:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.384 15:10:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.384 15:10:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.384 15:10:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.384 15:10:56 -- paths/export.sh@5 -- # export PATH 00:23:41.384 15:10:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.384 15:10:56 -- nvmf/common.sh@47 -- # : 0 00:23:41.384 15:10:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.384 15:10:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.384 15:10:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.384 15:10:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.384 15:10:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.384 15:10:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.384 15:10:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.384 15:10:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.384 15:10:56 -- host/aer.sh@11 -- # nvmftestinit 00:23:41.384 15:10:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:41.384 15:10:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.384 15:10:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:41.384 15:10:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:41.384 15:10:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:41.384 15:10:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.384 15:10:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.384 15:10:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.384 15:10:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:41.384 15:10:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:41.384 15:10:56 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.384 15:10:56 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.384 15:10:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:41.384 15:10:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:41.384 15:10:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:41.384 15:10:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:41.384 15:10:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:41.384 15:10:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.384 15:10:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:41.384 15:10:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:41.384 15:10:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:41.384 15:10:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:41.384 15:10:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:41.384 15:10:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:41.384 Cannot find device "nvmf_tgt_br" 00:23:41.384 15:10:57 -- nvmf/common.sh@155 -- # true 00:23:41.384 15:10:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:41.384 Cannot find device "nvmf_tgt_br2" 00:23:41.384 15:10:57 -- nvmf/common.sh@156 -- # true 00:23:41.384 15:10:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:41.384 15:10:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:41.384 Cannot find device "nvmf_tgt_br" 00:23:41.384 15:10:57 -- nvmf/common.sh@158 -- # true 00:23:41.384 15:10:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:41.384 Cannot find device "nvmf_tgt_br2" 00:23:41.385 15:10:57 -- nvmf/common.sh@159 -- # true 00:23:41.385 15:10:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:41.643 15:10:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:41.643 15:10:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:41.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.643 15:10:57 -- nvmf/common.sh@162 -- # true 00:23:41.643 15:10:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:41.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:41.643 15:10:57 -- nvmf/common.sh@163 -- # true 00:23:41.643 15:10:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:41.643 15:10:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:41.643 15:10:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:41.643 15:10:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:41.643 15:10:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:41.643 15:10:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:41.643 15:10:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:41.643 15:10:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:41.643 15:10:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:41.643 15:10:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:41.643 15:10:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:41.643 15:10:57 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:41.643 15:10:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:41.643 15:10:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:41.643 15:10:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:41.643 15:10:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:41.643 15:10:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:41.643 15:10:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:41.643 15:10:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:41.643 15:10:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:41.643 15:10:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:41.902 15:10:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:41.902 15:10:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:41.902 15:10:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:41.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:41.902 00:23:41.902 --- 10.0.0.2 ping statistics --- 00:23:41.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.902 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:41.902 15:10:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:41.902 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:41.902 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:23:41.902 00:23:41.902 --- 10.0.0.3 ping statistics --- 00:23:41.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.902 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:41.902 15:10:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:41.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:23:41.902 00:23:41.902 --- 10.0.0.1 ping statistics --- 00:23:41.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.902 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:23:41.902 15:10:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.902 15:10:57 -- nvmf/common.sh@422 -- # return 0 00:23:41.902 15:10:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:41.902 15:10:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.902 15:10:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:41.902 15:10:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:41.902 15:10:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.902 15:10:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:41.902 15:10:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:41.902 15:10:57 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:41.902 15:10:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:41.902 15:10:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:41.902 15:10:57 -- common/autotest_common.sh@10 -- # set +x 00:23:41.902 15:10:57 -- nvmf/common.sh@470 -- # nvmfpid=79686 00:23:41.902 15:10:57 -- nvmf/common.sh@471 -- # waitforlisten 79686 00:23:41.902 15:10:57 -- common/autotest_common.sh@817 -- # '[' -z 79686 ']' 00:23:41.902 15:10:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.902 15:10:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.902 15:10:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:41.902 15:10:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.902 15:10:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:41.902 15:10:57 -- common/autotest_common.sh@10 -- # set +x 00:23:41.902 [2024-04-18 15:10:57.472055] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:41.902 [2024-04-18 15:10:57.472184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.161 [2024-04-18 15:10:57.618632] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.161 [2024-04-18 15:10:57.718315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.161 [2024-04-18 15:10:57.718376] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.161 [2024-04-18 15:10:57.718386] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.161 [2024-04-18 15:10:57.718396] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.161 [2024-04-18 15:10:57.718404] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
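The host/aer.sh flow that the trace walks through next is a compact namespace-change AEN check: one subsystem capped at two namespaces, the aer test binary from test/nvme/aer attached as an initiator, and a second namespace hot-added to fire the Changed Namespace notice. A sketch of those steps as direct rpc.py calls, with paths and NQNs taken from the trace and the usual assumption that rpc_cmd wraps scripts/rpc.py:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 --name Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # aer connects, registers its AER callback, and touches the file once armed;
    # the harness (waitforfile) polls for that file before changing anything
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &

    # Hot-adding namespace 2 is what produces the "aer_cb - Changed Namespace"
    # line and the updated nvmf_get_subsystems output in the log below
    $RPC bdev_malloc_create 64 4096 --name Malloc1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2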
00:23:42.161 [2024-04-18 15:10:57.718500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.161 [2024-04-18 15:10:57.718631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.161 [2024-04-18 15:10:57.719503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.161 [2024-04-18 15:10:57.719504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.728 15:10:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:42.728 15:10:58 -- common/autotest_common.sh@850 -- # return 0 00:23:42.728 15:10:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:42.728 15:10:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:42.728 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.728 15:10:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.728 15:10:58 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.728 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.728 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.728 [2024-04-18 15:10:58.427799] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.987 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.987 15:10:58 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:42.988 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.988 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.988 Malloc0 00:23:42.988 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.988 15:10:58 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:42.988 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.988 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.988 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.988 15:10:58 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.988 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.988 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.988 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.988 15:10:58 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.988 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.988 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.988 [2024-04-18 15:10:58.501008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.988 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.988 15:10:58 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:42.988 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.988 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:42.988 [2024-04-18 15:10:58.512751] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:42.988 [ 00:23:42.988 { 00:23:42.988 "allow_any_host": true, 00:23:42.988 "hosts": [], 00:23:42.988 "listen_addresses": [], 00:23:42.988 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:42.988 "subtype": "Discovery" 00:23:42.988 }, 00:23:42.988 { 00:23:42.988 "allow_any_host": true, 00:23:42.988 "hosts": 
[], 00:23:42.988 "listen_addresses": [ 00:23:42.988 { 00:23:42.988 "adrfam": "IPv4", 00:23:42.988 "traddr": "10.0.0.2", 00:23:42.988 "transport": "TCP", 00:23:42.988 "trsvcid": "4420", 00:23:42.988 "trtype": "TCP" 00:23:42.988 } 00:23:42.988 ], 00:23:42.988 "max_cntlid": 65519, 00:23:42.988 "max_namespaces": 2, 00:23:42.988 "min_cntlid": 1, 00:23:42.988 "model_number": "SPDK bdev Controller", 00:23:42.988 "namespaces": [ 00:23:42.988 { 00:23:42.988 "bdev_name": "Malloc0", 00:23:42.988 "name": "Malloc0", 00:23:42.988 "nguid": "8C7D6098C61048D9AD3DCF4FB49BC189", 00:23:42.988 "nsid": 1, 00:23:42.988 "uuid": "8c7d6098-c610-48d9-ad3d-cf4fb49bc189" 00:23:42.988 } 00:23:42.988 ], 00:23:42.988 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.988 "serial_number": "SPDK00000000000001", 00:23:42.988 "subtype": "NVMe" 00:23:42.988 } 00:23:42.988 ] 00:23:42.988 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:42.988 15:10:58 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:42.988 15:10:58 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:42.988 15:10:58 -- host/aer.sh@33 -- # aerpid=79746 00:23:42.988 15:10:58 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:42.988 15:10:58 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:42.988 15:10:58 -- common/autotest_common.sh@1251 -- # local i=0 00:23:42.988 15:10:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.988 15:10:58 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:23:42.988 15:10:58 -- common/autotest_common.sh@1254 -- # i=1 00:23:42.988 15:10:58 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:23:42.988 15:10:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:42.988 15:10:58 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:23:42.988 15:10:58 -- common/autotest_common.sh@1254 -- # i=2 00:23:42.988 15:10:58 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:23:43.247 15:10:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:43.247 15:10:58 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:43.247 15:10:58 -- common/autotest_common.sh@1262 -- # return 0 00:23:43.247 15:10:58 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:43.247 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.247 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.247 Malloc1 00:23:43.247 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.247 15:10:58 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:43.247 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.247 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.247 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.247 15:10:58 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:43.247 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.247 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.247 Asynchronous Event Request test 00:23:43.247 Attaching to 10.0.0.2 00:23:43.247 Attached to 10.0.0.2 00:23:43.247 Registering asynchronous event callbacks... 00:23:43.247 Starting namespace attribute notice tests for all controllers... 
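Note on the sequence above: the AER test stands the target up entirely through rpc_cmd, which in these scripts is a thin wrapper around scripts/rpc.py talking to the app's RPC socket. A minimal standalone sketch of the same setup follows (the wrapper behaviour and relative paths are assumptions; the flags themselves are the ones visible in the trace):

  # target side: TCP transport, one 64 MiB / 512 B-block malloc bdev, subsystem capped at 2 namespaces
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: the aer tool registers AER callbacks and touches a file once it is ready;
  # adding Malloc1 as a second namespace afterwards is what triggers the namespace-change notice
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2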
00:23:43.247 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:43.247 aer_cb - Changed Namespace 00:23:43.247 Cleaning up... 00:23:43.247 [ 00:23:43.247 { 00:23:43.247 "allow_any_host": true, 00:23:43.247 "hosts": [], 00:23:43.247 "listen_addresses": [], 00:23:43.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:43.247 "subtype": "Discovery" 00:23:43.247 }, 00:23:43.247 { 00:23:43.247 "allow_any_host": true, 00:23:43.247 "hosts": [], 00:23:43.247 "listen_addresses": [ 00:23:43.247 { 00:23:43.247 "adrfam": "IPv4", 00:23:43.247 "traddr": "10.0.0.2", 00:23:43.247 "transport": "TCP", 00:23:43.247 "trsvcid": "4420", 00:23:43.247 "trtype": "TCP" 00:23:43.247 } 00:23:43.247 ], 00:23:43.247 "max_cntlid": 65519, 00:23:43.247 "max_namespaces": 2, 00:23:43.247 "min_cntlid": 1, 00:23:43.247 "model_number": "SPDK bdev Controller", 00:23:43.247 "namespaces": [ 00:23:43.247 { 00:23:43.247 "bdev_name": "Malloc0", 00:23:43.247 "name": "Malloc0", 00:23:43.247 "nguid": "8C7D6098C61048D9AD3DCF4FB49BC189", 00:23:43.247 "nsid": 1, 00:23:43.247 "uuid": "8c7d6098-c610-48d9-ad3d-cf4fb49bc189" 00:23:43.247 }, 00:23:43.247 { 00:23:43.247 "bdev_name": "Malloc1", 00:23:43.247 "name": "Malloc1", 00:23:43.247 "nguid": "30EE1B794F3B42B5BB71F01673B3EB96", 00:23:43.247 "nsid": 2, 00:23:43.247 "uuid": "30ee1b79-4f3b-42b5-bb71-f01673b3eb96" 00:23:43.247 } 00:23:43.247 ], 00:23:43.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.247 "serial_number": "SPDK00000000000001", 00:23:43.247 "subtype": "NVMe" 00:23:43.247 } 00:23:43.248 ] 00:23:43.248 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.248 15:10:58 -- host/aer.sh@43 -- # wait 79746 00:23:43.248 15:10:58 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:43.248 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.248 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.248 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.248 15:10:58 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:43.248 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.248 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.248 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.248 15:10:58 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.248 15:10:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.248 15:10:58 -- common/autotest_common.sh@10 -- # set +x 00:23:43.248 15:10:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.248 15:10:58 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:43.248 15:10:58 -- host/aer.sh@51 -- # nvmftestfini 00:23:43.248 15:10:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:43.248 15:10:58 -- nvmf/common.sh@117 -- # sync 00:23:43.506 15:10:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.506 15:10:58 -- nvmf/common.sh@120 -- # set +e 00:23:43.506 15:10:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.506 15:10:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.506 rmmod nvme_tcp 00:23:43.506 rmmod nvme_fabrics 00:23:43.506 rmmod nvme_keyring 00:23:43.506 15:10:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.506 15:10:59 -- nvmf/common.sh@124 -- # set -e 00:23:43.506 15:10:59 -- nvmf/common.sh@125 -- # return 0 00:23:43.506 15:10:59 -- nvmf/common.sh@478 -- # '[' -n 79686 ']' 00:23:43.506 15:10:59 -- nvmf/common.sh@479 -- # killprocess 79686 00:23:43.506 15:10:59 -- 
common/autotest_common.sh@936 -- # '[' -z 79686 ']' 00:23:43.506 15:10:59 -- common/autotest_common.sh@940 -- # kill -0 79686 00:23:43.506 15:10:59 -- common/autotest_common.sh@941 -- # uname 00:23:43.506 15:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:43.506 15:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79686 00:23:43.506 15:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:43.506 killing process with pid 79686 00:23:43.506 15:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:43.506 15:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79686' 00:23:43.506 15:10:59 -- common/autotest_common.sh@955 -- # kill 79686 00:23:43.506 [2024-04-18 15:10:59.066057] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:43.506 15:10:59 -- common/autotest_common.sh@960 -- # wait 79686 00:23:43.764 15:10:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:43.764 15:10:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:43.764 15:10:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:43.764 15:10:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.764 15:10:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.764 15:10:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.764 15:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.764 15:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.764 15:10:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:43.764 00:23:43.764 real 0m2.532s 00:23:43.764 user 0m6.375s 00:23:43.764 sys 0m0.833s 00:23:43.764 15:10:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.764 15:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:43.764 ************************************ 00:23:43.764 END TEST nvmf_aer 00:23:43.764 ************************************ 00:23:43.764 15:10:59 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:43.764 15:10:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.764 15:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.764 15:10:59 -- common/autotest_common.sh@10 -- # set +x 00:23:44.023 ************************************ 00:23:44.023 START TEST nvmf_async_init 00:23:44.023 ************************************ 00:23:44.023 15:10:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:44.023 * Looking for test storage... 
00:23:44.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:44.023 15:10:59 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:44.023 15:10:59 -- nvmf/common.sh@7 -- # uname -s 00:23:44.023 15:10:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.023 15:10:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.023 15:10:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.023 15:10:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.023 15:10:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.023 15:10:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.023 15:10:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.023 15:10:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.023 15:10:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.023 15:10:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:44.023 15:10:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:44.023 15:10:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.023 15:10:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.023 15:10:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:44.023 15:10:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.023 15:10:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:44.023 15:10:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.023 15:10:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.023 15:10:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.023 15:10:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.023 15:10:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.023 15:10:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.023 15:10:59 -- paths/export.sh@5 -- # export PATH 00:23:44.023 15:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.023 15:10:59 -- nvmf/common.sh@47 -- # : 0 00:23:44.023 15:10:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:44.023 15:10:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:44.023 15:10:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.023 15:10:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.023 15:10:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.023 15:10:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:44.023 15:10:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:44.023 15:10:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:44.023 15:10:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:44.023 15:10:59 -- host/async_init.sh@14 -- # null_block_size=512 00:23:44.023 15:10:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:44.023 15:10:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:44.023 15:10:59 -- host/async_init.sh@20 -- # uuidgen 00:23:44.023 15:10:59 -- host/async_init.sh@20 -- # tr -d - 00:23:44.023 15:10:59 -- host/async_init.sh@20 -- # nguid=59669430549c4b0ba3c3df618f5931e2 00:23:44.023 15:10:59 -- host/async_init.sh@22 -- # nvmftestinit 00:23:44.023 15:10:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:44.023 15:10:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.023 15:10:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:44.023 15:10:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:44.023 15:10:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:44.023 15:10:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.023 15:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:44.023 15:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.023 15:10:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:44.023 15:10:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:44.023 15:10:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.023 15:10:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.023 15:10:59 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:44.023 15:10:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:44.023 15:10:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:44.023 15:10:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:44.023 15:10:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:44.023 15:10:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.023 15:10:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:44.023 15:10:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:44.023 15:10:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:44.023 15:10:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:44.023 15:10:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:44.282 15:10:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:44.282 Cannot find device "nvmf_tgt_br" 00:23:44.283 15:10:59 -- nvmf/common.sh@155 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:44.283 Cannot find device "nvmf_tgt_br2" 00:23:44.283 15:10:59 -- nvmf/common.sh@156 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:44.283 15:10:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:44.283 Cannot find device "nvmf_tgt_br" 00:23:44.283 15:10:59 -- nvmf/common.sh@158 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:44.283 Cannot find device "nvmf_tgt_br2" 00:23:44.283 15:10:59 -- nvmf/common.sh@159 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:44.283 15:10:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:44.283 15:10:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:44.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:44.283 15:10:59 -- nvmf/common.sh@162 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:44.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:44.283 15:10:59 -- nvmf/common.sh@163 -- # true 00:23:44.283 15:10:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:44.283 15:10:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:44.283 15:10:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:44.283 15:10:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:44.283 15:10:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:44.283 15:10:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:44.283 15:10:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:44.283 15:10:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:44.283 15:10:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:44.542 15:10:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:44.542 15:10:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:44.542 15:10:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:44.542 15:11:00 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:44.542 15:11:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:44.542 15:11:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:44.542 15:11:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:44.542 15:11:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:44.542 15:11:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:44.542 15:11:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:44.542 15:11:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:44.542 15:11:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:44.542 15:11:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:44.542 15:11:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:44.542 15:11:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:44.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:23:44.542 00:23:44.542 --- 10.0.0.2 ping statistics --- 00:23:44.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.542 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:23:44.542 15:11:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:44.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:44.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:23:44.542 00:23:44.542 --- 10.0.0.3 ping statistics --- 00:23:44.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.542 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:44.542 15:11:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:44.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:23:44.542 00:23:44.542 --- 10.0.0.1 ping statistics --- 00:23:44.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.542 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:44.542 15:11:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.542 15:11:00 -- nvmf/common.sh@422 -- # return 0 00:23:44.542 15:11:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:44.542 15:11:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.542 15:11:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:44.542 15:11:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:44.542 15:11:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.542 15:11:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:44.542 15:11:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:44.542 15:11:00 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:44.542 15:11:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:44.542 15:11:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:44.542 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:23:44.542 15:11:00 -- nvmf/common.sh@470 -- # nvmfpid=79926 00:23:44.542 15:11:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:44.542 15:11:00 -- nvmf/common.sh@471 -- # waitforlisten 79926 00:23:44.542 15:11:00 -- common/autotest_common.sh@817 -- # '[' -z 79926 ']' 00:23:44.542 15:11:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.542 15:11:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:44.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.542 15:11:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.542 15:11:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:44.542 15:11:00 -- common/autotest_common.sh@10 -- # set +x 00:23:44.542 [2024-04-18 15:11:00.227427] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:44.542 [2024-04-18 15:11:00.227498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.801 [2024-04-18 15:11:00.370312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.801 [2024-04-18 15:11:00.471089] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.801 [2024-04-18 15:11:00.471155] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.801 [2024-04-18 15:11:00.471166] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.801 [2024-04-18 15:11:00.471174] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.801 [2024-04-18 15:11:00.471203] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
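The "Cannot find device" / "No such file or directory" lines above are expected: nvmf_veth_init first tries to tear down any leftover test network, then rebuilds it from scratch. A condensed sketch of what the trace executes (interface and namespace names are the ones used by nvmf/common.sh; only commands that appear in the trace are included):

  # fresh namespace plus three veth pairs; the *_if ends go to the target, the *_br ends join a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator is 10.0.0.1 on the host, the target answers on 10.0.0.2/10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side ends together and let NVMe/TCP traffic through
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping statistics printed above are the sanity check that 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3 over this plumbing before any NVMe traffic is attempted.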
00:23:44.801 [2024-04-18 15:11:00.471253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.737 15:11:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:45.737 15:11:01 -- common/autotest_common.sh@850 -- # return 0 00:23:45.737 15:11:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:45.737 15:11:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 15:11:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.737 15:11:01 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 [2024-04-18 15:11:01.184479] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 null0 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 59669430549c4b0ba3c3df618f5931e2 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.737 [2024-04-18 15:11:01.240510] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.737 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.737 15:11:01 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:45.737 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.737 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.997 nvme0n1 00:23:45.997 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.997 15:11:01 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.997 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.997 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.997 [ 00:23:45.997 { 00:23:45.997 "aliases": [ 00:23:45.997 "59669430-549c-4b0b-a3c3-df618f5931e2" 
00:23:45.997 ], 00:23:45.997 "assigned_rate_limits": { 00:23:45.997 "r_mbytes_per_sec": 0, 00:23:45.997 "rw_ios_per_sec": 0, 00:23:45.997 "rw_mbytes_per_sec": 0, 00:23:45.997 "w_mbytes_per_sec": 0 00:23:45.997 }, 00:23:45.997 "block_size": 512, 00:23:45.997 "claimed": false, 00:23:45.997 "driver_specific": { 00:23:45.997 "mp_policy": "active_passive", 00:23:45.997 "nvme": [ 00:23:45.997 { 00:23:45.997 "ctrlr_data": { 00:23:45.997 "ana_reporting": false, 00:23:45.997 "cntlid": 1, 00:23:45.997 "firmware_revision": "24.05", 00:23:45.997 "model_number": "SPDK bdev Controller", 00:23:45.997 "multi_ctrlr": true, 00:23:45.997 "oacs": { 00:23:45.997 "firmware": 0, 00:23:45.997 "format": 0, 00:23:45.997 "ns_manage": 0, 00:23:45.997 "security": 0 00:23:45.997 }, 00:23:45.997 "serial_number": "00000000000000000000", 00:23:45.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.997 "vendor_id": "0x8086" 00:23:45.997 }, 00:23:45.997 "ns_data": { 00:23:45.997 "can_share": true, 00:23:45.997 "id": 1 00:23:45.997 }, 00:23:45.997 "trid": { 00:23:45.997 "adrfam": "IPv4", 00:23:45.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.997 "traddr": "10.0.0.2", 00:23:45.997 "trsvcid": "4420", 00:23:45.997 "trtype": "TCP" 00:23:45.997 }, 00:23:45.997 "vs": { 00:23:45.997 "nvme_version": "1.3" 00:23:45.997 } 00:23:45.997 } 00:23:45.997 ] 00:23:45.997 }, 00:23:45.997 "memory_domains": [ 00:23:45.997 { 00:23:45.997 "dma_device_id": "system", 00:23:45.997 "dma_device_type": 1 00:23:45.997 } 00:23:45.997 ], 00:23:45.997 "name": "nvme0n1", 00:23:45.997 "num_blocks": 2097152, 00:23:45.997 "product_name": "NVMe disk", 00:23:45.997 "supported_io_types": { 00:23:45.997 "abort": true, 00:23:45.997 "compare": true, 00:23:45.997 "compare_and_write": true, 00:23:45.997 "flush": true, 00:23:45.997 "nvme_admin": true, 00:23:45.997 "nvme_io": true, 00:23:45.997 "read": true, 00:23:45.997 "reset": true, 00:23:45.997 "unmap": false, 00:23:45.997 "write": true, 00:23:45.997 "write_zeroes": true 00:23:45.997 }, 00:23:45.997 "uuid": "59669430-549c-4b0b-a3c3-df618f5931e2", 00:23:45.997 "zoned": false 00:23:45.997 } 00:23:45.997 ] 00:23:45.997 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.997 15:11:01 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:45.997 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.997 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.997 [2024-04-18 15:11:01.516066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:45.997 [2024-04-18 15:11:01.516165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da4260 (9): Bad file descriptor 00:23:45.997 [2024-04-18 15:11:01.647732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
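For the async_init test the host side never touches the kernel initiator; it attaches the remote subsystem as a local bdev through the same SPDK RPC interface. A rough equivalent of the three calls exercised above (socket path and rpc.py invocation are assumptions; names and flags are taken from the trace):

  # attach nqn.2016-06.io.spdk:cnode0 over TCP; the null0 namespace shows up as bdev nvme0n1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1         # JSON dump as printed above (cntlid 1)
  scripts/rpc.py bdev_nvme_reset_controller nvme0  # disconnect/reconnect; the next dump shows cntlid 2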
00:23:45.997 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.997 15:11:01 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:45.997 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.997 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.997 [ 00:23:45.997 { 00:23:45.997 "aliases": [ 00:23:45.997 "59669430-549c-4b0b-a3c3-df618f5931e2" 00:23:45.997 ], 00:23:45.997 "assigned_rate_limits": { 00:23:45.997 "r_mbytes_per_sec": 0, 00:23:45.997 "rw_ios_per_sec": 0, 00:23:45.997 "rw_mbytes_per_sec": 0, 00:23:45.997 "w_mbytes_per_sec": 0 00:23:45.997 }, 00:23:45.997 "block_size": 512, 00:23:45.997 "claimed": false, 00:23:45.997 "driver_specific": { 00:23:45.997 "mp_policy": "active_passive", 00:23:45.997 "nvme": [ 00:23:45.997 { 00:23:45.997 "ctrlr_data": { 00:23:45.997 "ana_reporting": false, 00:23:45.997 "cntlid": 2, 00:23:45.997 "firmware_revision": "24.05", 00:23:45.997 "model_number": "SPDK bdev Controller", 00:23:45.997 "multi_ctrlr": true, 00:23:45.997 "oacs": { 00:23:45.997 "firmware": 0, 00:23:45.997 "format": 0, 00:23:45.997 "ns_manage": 0, 00:23:45.997 "security": 0 00:23:45.997 }, 00:23:45.997 "serial_number": "00000000000000000000", 00:23:45.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.997 "vendor_id": "0x8086" 00:23:45.997 }, 00:23:45.997 "ns_data": { 00:23:45.997 "can_share": true, 00:23:45.997 "id": 1 00:23:45.997 }, 00:23:45.997 "trid": { 00:23:45.997 "adrfam": "IPv4", 00:23:45.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:45.997 "traddr": "10.0.0.2", 00:23:45.997 "trsvcid": "4420", 00:23:45.997 "trtype": "TCP" 00:23:45.997 }, 00:23:45.997 "vs": { 00:23:45.997 "nvme_version": "1.3" 00:23:45.997 } 00:23:45.997 } 00:23:45.997 ] 00:23:45.997 }, 00:23:45.997 "memory_domains": [ 00:23:45.997 { 00:23:45.997 "dma_device_id": "system", 00:23:45.997 "dma_device_type": 1 00:23:45.997 } 00:23:45.997 ], 00:23:45.997 "name": "nvme0n1", 00:23:45.997 "num_blocks": 2097152, 00:23:45.997 "product_name": "NVMe disk", 00:23:45.997 "supported_io_types": { 00:23:45.997 "abort": true, 00:23:45.997 "compare": true, 00:23:45.997 "compare_and_write": true, 00:23:45.997 "flush": true, 00:23:45.997 "nvme_admin": true, 00:23:45.997 "nvme_io": true, 00:23:45.997 "read": true, 00:23:45.997 "reset": true, 00:23:45.997 "unmap": false, 00:23:45.997 "write": true, 00:23:45.997 "write_zeroes": true 00:23:45.997 }, 00:23:45.997 "uuid": "59669430-549c-4b0b-a3c3-df618f5931e2", 00:23:45.997 "zoned": false 00:23:45.997 } 00:23:45.997 ] 00:23:45.997 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:45.997 15:11:01 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.997 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:45.997 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:45.997 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@53 -- # mktemp 00:23:46.290 15:11:01 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GuGTuKcrKQ 00:23:46.290 15:11:01 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:46.290 15:11:01 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GuGTuKcrKQ 00:23:46.290 15:11:01 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.290 15:11:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.290 [2024-04-18 15:11:01.735973] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:46.290 [2024-04-18 15:11:01.736187] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.290 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuGTuKcrKQ 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.290 [2024-04-18 15:11:01.747952] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:46.290 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GuGTuKcrKQ 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.290 [2024-04-18 15:11:01.759943] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.290 [2024-04-18 15:11:01.760021] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:46.290 nvme0n1 00:23:46.290 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.290 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.290 [ 00:23:46.290 { 00:23:46.290 "aliases": [ 00:23:46.290 "59669430-549c-4b0b-a3c3-df618f5931e2" 00:23:46.290 ], 00:23:46.290 "assigned_rate_limits": { 00:23:46.290 "r_mbytes_per_sec": 0, 00:23:46.290 "rw_ios_per_sec": 0, 00:23:46.290 "rw_mbytes_per_sec": 0, 00:23:46.290 "w_mbytes_per_sec": 0 00:23:46.290 }, 00:23:46.290 "block_size": 512, 00:23:46.290 "claimed": false, 00:23:46.290 "driver_specific": { 00:23:46.290 "mp_policy": "active_passive", 00:23:46.290 "nvme": [ 00:23:46.290 { 00:23:46.290 "ctrlr_data": { 00:23:46.290 "ana_reporting": false, 00:23:46.290 "cntlid": 3, 00:23:46.290 "firmware_revision": "24.05", 00:23:46.290 "model_number": "SPDK bdev Controller", 00:23:46.290 "multi_ctrlr": true, 00:23:46.290 "oacs": { 00:23:46.290 "firmware": 0, 00:23:46.290 "format": 0, 00:23:46.290 "ns_manage": 0, 00:23:46.290 "security": 0 00:23:46.290 }, 00:23:46.290 "serial_number": "00000000000000000000", 00:23:46.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.290 "vendor_id": "0x8086" 00:23:46.290 }, 00:23:46.290 "ns_data": { 00:23:46.290 "can_share": true, 00:23:46.290 "id": 1 00:23:46.290 }, 00:23:46.290 "trid": { 00:23:46.290 "adrfam": "IPv4", 00:23:46.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.290 "traddr": "10.0.0.2", 00:23:46.290 "trsvcid": "4421", 00:23:46.290 "trtype": 
"TCP" 00:23:46.290 }, 00:23:46.290 "vs": { 00:23:46.290 "nvme_version": "1.3" 00:23:46.290 } 00:23:46.290 } 00:23:46.290 ] 00:23:46.290 }, 00:23:46.290 "memory_domains": [ 00:23:46.290 { 00:23:46.290 "dma_device_id": "system", 00:23:46.290 "dma_device_type": 1 00:23:46.290 } 00:23:46.290 ], 00:23:46.290 "name": "nvme0n1", 00:23:46.290 "num_blocks": 2097152, 00:23:46.290 "product_name": "NVMe disk", 00:23:46.290 "supported_io_types": { 00:23:46.290 "abort": true, 00:23:46.290 "compare": true, 00:23:46.290 "compare_and_write": true, 00:23:46.290 "flush": true, 00:23:46.290 "nvme_admin": true, 00:23:46.290 "nvme_io": true, 00:23:46.290 "read": true, 00:23:46.290 "reset": true, 00:23:46.290 "unmap": false, 00:23:46.290 "write": true, 00:23:46.290 "write_zeroes": true 00:23:46.290 }, 00:23:46.290 "uuid": "59669430-549c-4b0b-a3c3-df618f5931e2", 00:23:46.290 "zoned": false 00:23:46.290 } 00:23:46.290 ] 00:23:46.290 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.290 15:11:01 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.290 15:11:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:46.291 15:11:01 -- common/autotest_common.sh@10 -- # set +x 00:23:46.291 15:11:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:46.291 15:11:01 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GuGTuKcrKQ 00:23:46.291 15:11:01 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:46.291 15:11:01 -- host/async_init.sh@78 -- # nvmftestfini 00:23:46.291 15:11:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:46.291 15:11:01 -- nvmf/common.sh@117 -- # sync 00:23:46.291 15:11:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.291 15:11:01 -- nvmf/common.sh@120 -- # set +e 00:23:46.291 15:11:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.291 15:11:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.291 rmmod nvme_tcp 00:23:46.291 rmmod nvme_fabrics 00:23:46.291 rmmod nvme_keyring 00:23:46.557 15:11:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.557 15:11:01 -- nvmf/common.sh@124 -- # set -e 00:23:46.557 15:11:01 -- nvmf/common.sh@125 -- # return 0 00:23:46.557 15:11:01 -- nvmf/common.sh@478 -- # '[' -n 79926 ']' 00:23:46.557 15:11:01 -- nvmf/common.sh@479 -- # killprocess 79926 00:23:46.557 15:11:01 -- common/autotest_common.sh@936 -- # '[' -z 79926 ']' 00:23:46.557 15:11:01 -- common/autotest_common.sh@940 -- # kill -0 79926 00:23:46.557 15:11:01 -- common/autotest_common.sh@941 -- # uname 00:23:46.557 15:11:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:46.557 15:11:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79926 00:23:46.557 15:11:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:46.557 killing process with pid 79926 00:23:46.557 15:11:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:46.557 15:11:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79926' 00:23:46.557 15:11:02 -- common/autotest_common.sh@955 -- # kill 79926 00:23:46.557 [2024-04-18 15:11:02.033252] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:46.557 [2024-04-18 15:11:02.033297] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:46.557 15:11:02 -- common/autotest_common.sh@960 -- # wait 79926 00:23:46.557 15:11:02 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:23:46.557 15:11:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:46.557 15:11:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:46.557 15:11:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.557 15:11:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.557 15:11:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.557 15:11:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.557 15:11:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.816 15:11:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:46.816 00:23:46.816 real 0m2.778s 00:23:46.816 user 0m2.386s 00:23:46.816 sys 0m0.818s 00:23:46.816 15:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:46.816 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:23:46.816 ************************************ 00:23:46.817 END TEST nvmf_async_init 00:23:46.817 ************************************ 00:23:46.817 15:11:02 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:46.817 15:11:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:46.817 15:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:46.817 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:23:46.817 ************************************ 00:23:46.817 START TEST dma 00:23:46.817 ************************************ 00:23:46.817 15:11:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.075 * Looking for test storage... 00:23:47.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:47.075 15:11:02 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:47.075 15:11:02 -- nvmf/common.sh@7 -- # uname -s 00:23:47.075 15:11:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.075 15:11:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.075 15:11:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.075 15:11:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.075 15:11:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.076 15:11:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.076 15:11:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.076 15:11:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.076 15:11:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.076 15:11:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.076 15:11:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:47.076 15:11:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:47.076 15:11:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.076 15:11:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.076 15:11:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:47.076 15:11:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.076 15:11:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:47.076 15:11:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.076 15:11:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.076 15:11:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
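Before this teardown, the async_init test also exercised the experimental TLS path seen in the trace above (port 4421, host nqn.2016-06.io.spdk:host1). Reduced to the bare commands, and noting that the trace itself flags the PSK-path variant as deprecated for v24.09, the sequence is approximately:

  key_path=$(mktemp)   # /tmp/tmp.GuGTuKcrKQ in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"

  # require an explicit host entry, open a TLS-only listener on 4421, and register the PSK for host1
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # host side: same attach as before, now against 4421 with the host NQN and PSK
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"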
00:23:47.076 15:11:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 15:11:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 15:11:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 15:11:02 -- paths/export.sh@5 -- # export PATH 00:23:47.076 15:11:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 15:11:02 -- nvmf/common.sh@47 -- # : 0 00:23:47.076 15:11:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.076 15:11:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.076 15:11:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.076 15:11:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.076 15:11:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.076 15:11:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.076 15:11:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.076 15:11:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.076 15:11:02 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:47.076 15:11:02 -- host/dma.sh@13 -- # exit 0 00:23:47.076 00:23:47.076 real 0m0.172s 00:23:47.076 user 0m0.074s 00:23:47.076 sys 0m0.110s 00:23:47.076 15:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:47.076 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:23:47.076 ************************************ 00:23:47.076 END TEST dma 00:23:47.076 ************************************ 00:23:47.076 15:11:02 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.076 15:11:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:47.076 15:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:47.076 15:11:02 -- common/autotest_common.sh@10 -- # set +x 00:23:47.334 ************************************ 00:23:47.334 START TEST nvmf_identify 00:23:47.334 ************************************ 00:23:47.334 15:11:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.334 * Looking for test storage... 00:23:47.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:47.334 15:11:02 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:47.334 15:11:02 -- nvmf/common.sh@7 -- # uname -s 00:23:47.334 15:11:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.334 15:11:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.334 15:11:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.334 15:11:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.334 15:11:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.334 15:11:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.334 15:11:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.334 15:11:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.334 15:11:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.334 15:11:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.334 15:11:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:47.334 15:11:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:47.334 15:11:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.334 15:11:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.334 15:11:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:47.334 15:11:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.334 15:11:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:47.334 15:11:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.334 15:11:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.334 15:11:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.335 15:11:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.335 15:11:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.335 15:11:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.335 15:11:02 -- paths/export.sh@5 -- # export PATH 00:23:47.335 15:11:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.335 15:11:02 -- nvmf/common.sh@47 -- # : 0 00:23:47.335 15:11:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.335 15:11:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.335 15:11:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.335 15:11:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.335 15:11:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.335 15:11:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.335 15:11:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.335 15:11:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.335 15:11:02 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.335 15:11:02 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.335 15:11:02 -- host/identify.sh@14 -- # nvmftestinit 00:23:47.335 15:11:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:47.335 15:11:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.335 15:11:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:47.335 15:11:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:47.335 15:11:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:47.335 15:11:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.335 15:11:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.335 15:11:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.335 15:11:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:47.335 15:11:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:47.335 15:11:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:47.335 15:11:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:47.335 15:11:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:47.335 15:11:02 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:23:47.335 15:11:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.335 15:11:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.335 15:11:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:47.335 15:11:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:47.335 15:11:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:47.335 15:11:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:47.335 15:11:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:47.335 15:11:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.335 15:11:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:47.335 15:11:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:47.335 15:11:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:47.335 15:11:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:47.335 15:11:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:47.335 15:11:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:47.335 Cannot find device "nvmf_tgt_br" 00:23:47.335 15:11:03 -- nvmf/common.sh@155 -- # true 00:23:47.335 15:11:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:47.594 Cannot find device "nvmf_tgt_br2" 00:23:47.594 15:11:03 -- nvmf/common.sh@156 -- # true 00:23:47.594 15:11:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:47.594 15:11:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:47.594 Cannot find device "nvmf_tgt_br" 00:23:47.594 15:11:03 -- nvmf/common.sh@158 -- # true 00:23:47.595 15:11:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:47.595 Cannot find device "nvmf_tgt_br2" 00:23:47.595 15:11:03 -- nvmf/common.sh@159 -- # true 00:23:47.595 15:11:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:47.595 15:11:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:47.595 15:11:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:47.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:47.595 15:11:03 -- nvmf/common.sh@162 -- # true 00:23:47.595 15:11:03 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:47.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:47.595 15:11:03 -- nvmf/common.sh@163 -- # true 00:23:47.595 15:11:03 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:47.595 15:11:03 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:47.595 15:11:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:47.595 15:11:03 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:47.595 15:11:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:47.595 15:11:03 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:47.595 15:11:03 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:47.595 15:11:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:47.595 15:11:03 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:47.595 15:11:03 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:23:47.595 15:11:03 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:47.595 15:11:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:47.595 15:11:03 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:47.595 15:11:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:47.867 15:11:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:47.867 15:11:03 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:47.867 15:11:03 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:47.867 15:11:03 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:47.867 15:11:03 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:47.867 15:11:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:47.867 15:11:03 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:47.867 15:11:03 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:47.867 15:11:03 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:47.867 15:11:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:47.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:23:47.867 00:23:47.867 --- 10.0.0.2 ping statistics --- 00:23:47.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.867 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:47.867 15:11:03 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:47.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:47.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:47.867 00:23:47.867 --- 10.0.0.3 ping statistics --- 00:23:47.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.867 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:47.867 15:11:03 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:47.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:47.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:23:47.867 00:23:47.867 --- 10.0.0.1 ping statistics --- 00:23:47.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.868 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:47.868 15:11:03 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.868 15:11:03 -- nvmf/common.sh@422 -- # return 0 00:23:47.868 15:11:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:47.868 15:11:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.868 15:11:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:47.868 15:11:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:47.868 15:11:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.868 15:11:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:47.868 15:11:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:47.868 15:11:03 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:47.868 15:11:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:47.868 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:23:47.868 15:11:03 -- host/identify.sh@19 -- # nvmfpid=80209 00:23:47.868 15:11:03 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.868 15:11:03 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.868 15:11:03 -- host/identify.sh@23 -- # waitforlisten 80209 00:23:47.868 15:11:03 -- common/autotest_common.sh@817 -- # '[' -z 80209 ']' 00:23:47.868 15:11:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.868 15:11:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:47.868 15:11:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.868 15:11:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:47.868 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:23:47.868 [2024-04-18 15:11:03.515056] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:47.868 [2024-04-18 15:11:03.515133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.130 [2024-04-18 15:11:03.661471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.130 [2024-04-18 15:11:03.761892] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.130 [2024-04-18 15:11:03.761961] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.130 [2024-04-18 15:11:03.761972] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.130 [2024-04-18 15:11:03.761982] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.130 [2024-04-18 15:11:03.761989] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.130 [2024-04-18 15:11:03.762191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.130 [2024-04-18 15:11:03.762327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.130 [2024-04-18 15:11:03.763368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.130 [2024-04-18 15:11:03.763367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.066 15:11:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:49.066 15:11:04 -- common/autotest_common.sh@850 -- # return 0 00:23:49.066 15:11:04 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 [2024-04-18 15:11:04.525096] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:49.066 15:11:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 15:11:04 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 Malloc0 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 [2024-04-18 15:11:04.669552] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:49.066 15:11:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.066 15:11:04 -- common/autotest_common.sh@10 -- # set +x 00:23:49.066 [2024-04-18 15:11:04.693230] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:49.066 [ 
00:23:49.066 { 00:23:49.066 "allow_any_host": true, 00:23:49.066 "hosts": [], 00:23:49.066 "listen_addresses": [ 00:23:49.066 { 00:23:49.066 "adrfam": "IPv4", 00:23:49.066 "traddr": "10.0.0.2", 00:23:49.066 "transport": "TCP", 00:23:49.066 "trsvcid": "4420", 00:23:49.066 "trtype": "TCP" 00:23:49.066 } 00:23:49.066 ], 00:23:49.066 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.066 "subtype": "Discovery" 00:23:49.066 }, 00:23:49.066 { 00:23:49.066 "allow_any_host": true, 00:23:49.066 "hosts": [], 00:23:49.066 "listen_addresses": [ 00:23:49.066 { 00:23:49.066 "adrfam": "IPv4", 00:23:49.066 "traddr": "10.0.0.2", 00:23:49.066 "transport": "TCP", 00:23:49.066 "trsvcid": "4420", 00:23:49.066 "trtype": "TCP" 00:23:49.066 } 00:23:49.066 ], 00:23:49.066 "max_cntlid": 65519, 00:23:49.066 "max_namespaces": 32, 00:23:49.066 "min_cntlid": 1, 00:23:49.066 "model_number": "SPDK bdev Controller", 00:23:49.066 "namespaces": [ 00:23:49.066 { 00:23:49.066 "bdev_name": "Malloc0", 00:23:49.066 "eui64": "ABCDEF0123456789", 00:23:49.066 "name": "Malloc0", 00:23:49.066 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:49.066 "nsid": 1, 00:23:49.066 "uuid": "e1b51001-ae61-46fb-81e9-38c199658f0d" 00:23:49.066 } 00:23:49.066 ], 00:23:49.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.066 "serial_number": "SPDK00000000000001", 00:23:49.066 "subtype": "NVMe" 00:23:49.066 } 00:23:49.066 ] 00:23:49.066 15:11:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.066 15:11:04 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:49.066 [2024-04-18 15:11:04.751563] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:23:49.066 [2024-04-18 15:11:04.751833] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80268 ] 00:23:49.330 [2024-04-18 15:11:04.890034] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:49.330 [2024-04-18 15:11:04.890126] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:49.330 [2024-04-18 15:11:04.890133] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:49.330 [2024-04-18 15:11:04.890150] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:49.330 [2024-04-18 15:11:04.890166] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:49.330 [2024-04-18 15:11:04.890333] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:49.330 [2024-04-18 15:11:04.890377] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c09300 0 00:23:49.330 [2024-04-18 15:11:04.895583] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:49.330 [2024-04-18 15:11:04.895611] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:49.330 [2024-04-18 15:11:04.895617] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:49.330 [2024-04-18 15:11:04.895621] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:49.330 [2024-04-18 15:11:04.895694] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.895702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.895710] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.330 [2024-04-18 15:11:04.895729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:49.330 [2024-04-18 15:11:04.895760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.330 [2024-04-18 15:11:04.903575] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.330 [2024-04-18 15:11:04.903598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.330 [2024-04-18 15:11:04.903603] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.330 [2024-04-18 15:11:04.903633] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:49.330 [2024-04-18 15:11:04.903642] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:49.330 [2024-04-18 15:11:04.903664] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:49.330 [2024-04-18 15:11:04.903697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.330 [2024-04-18 
15:11:04.903706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.330 [2024-04-18 15:11:04.903716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.330 [2024-04-18 15:11:04.903741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.330 [2024-04-18 15:11:04.903816] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.330 [2024-04-18 15:11:04.903822] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.330 [2024-04-18 15:11:04.903826] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.330 [2024-04-18 15:11:04.903841] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:49.330 [2024-04-18 15:11:04.903849] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:49.330 [2024-04-18 15:11:04.903856] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.330 [2024-04-18 15:11:04.903870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.330 [2024-04-18 15:11:04.903885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.330 [2024-04-18 15:11:04.903938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.330 [2024-04-18 15:11:04.903944] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.330 [2024-04-18 15:11:04.903948] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.330 [2024-04-18 15:11:04.903975] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:49.330 [2024-04-18 15:11:04.903983] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:49.330 [2024-04-18 15:11:04.903990] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.903998] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.330 [2024-04-18 15:11:04.904005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.330 [2024-04-18 15:11:04.904030] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.330 [2024-04-18 15:11:04.904085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.330 [2024-04-18 15:11:04.904091] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.330 [2024-04-18 15:11:04.904095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.904101] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.330 [2024-04-18 15:11:04.904107] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:49.330 [2024-04-18 15:11:04.904116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.904120] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.330 [2024-04-18 15:11:04.904124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.904130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.331 [2024-04-18 15:11:04.904144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.331 [2024-04-18 15:11:04.904200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.331 [2024-04-18 15:11:04.904206] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.331 [2024-04-18 15:11:04.904210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904214] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.331 [2024-04-18 15:11:04.904220] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:49.331 [2024-04-18 15:11:04.904225] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:49.331 [2024-04-18 15:11:04.904233] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:49.331 [2024-04-18 15:11:04.904339] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:49.331 [2024-04-18 15:11:04.904344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:49.331 [2024-04-18 15:11:04.904354] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904358] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.904368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.331 [2024-04-18 15:11:04.904382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.331 [2024-04-18 15:11:04.904430] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.331 [2024-04-18 15:11:04.904436] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.331 [2024-04-18 15:11:04.904440] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:49.331 [2024-04-18 15:11:04.904444] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.331 [2024-04-18 15:11:04.904450] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:49.331 [2024-04-18 15:11:04.904459] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904463] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904467] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.904473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.331 [2024-04-18 15:11:04.904487] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.331 [2024-04-18 15:11:04.904542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.331 [2024-04-18 15:11:04.904548] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.331 [2024-04-18 15:11:04.904568] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904572] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.331 [2024-04-18 15:11:04.904578] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:49.331 [2024-04-18 15:11:04.904584] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.904592] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:49.331 [2024-04-18 15:11:04.904602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.904612] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904638] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.904645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.331 [2024-04-18 15:11:04.904659] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.331 [2024-04-18 15:11:04.904757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.331 [2024-04-18 15:11:04.904766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.331 [2024-04-18 15:11:04.904771] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904776] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c09300): datao=0, datal=4096, cccid=0 00:23:49.331 [2024-04-18 15:11:04.904781] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c519c0) on tqpair(0x1c09300): expected_datao=0, payload_size=4096 00:23:49.331 [2024-04-18 15:11:04.904786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:49.331 [2024-04-18 15:11:04.904794] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904799] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.331 [2024-04-18 15:11:04.904813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.331 [2024-04-18 15:11:04.904817] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.331 [2024-04-18 15:11:04.904831] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:49.331 [2024-04-18 15:11:04.904838] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:49.331 [2024-04-18 15:11:04.904843] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:49.331 [2024-04-18 15:11:04.904853] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:49.331 [2024-04-18 15:11:04.904858] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:49.331 [2024-04-18 15:11:04.904864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.904873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.904881] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904885] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.904896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.331 [2024-04-18 15:11:04.904911] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.331 [2024-04-18 15:11:04.904977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.331 [2024-04-18 15:11:04.904983] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.331 [2024-04-18 15:11:04.904987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.904991] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c519c0) on tqpair=0x1c09300 00:23:49.331 [2024-04-18 15:11:04.905000] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905004] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905007] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.905013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.331 [2024-04-18 15:11:04.905020] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905024] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.905034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.331 [2024-04-18 15:11:04.905040] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905044] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.905053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.331 [2024-04-18 15:11:04.905059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.331 [2024-04-18 15:11:04.905067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.331 [2024-04-18 15:11:04.905072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.331 [2024-04-18 15:11:04.905078] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.905090] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:49.331 [2024-04-18 15:11:04.905097] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905101] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.905107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.332 [2024-04-18 15:11:04.905122] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c519c0, cid 0, qid 0 00:23:49.332 [2024-04-18 15:11:04.905128] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51b20, cid 1, qid 0 00:23:49.332 [2024-04-18 15:11:04.905133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51c80, cid 2, qid 0 00:23:49.332 [2024-04-18 15:11:04.905137] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.332 [2024-04-18 15:11:04.905142] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51f40, cid 4, qid 0 00:23:49.332 [2024-04-18 15:11:04.905236] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.905254] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.905258] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905262] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51f40) on tqpair=0x1c09300 00:23:49.332 [2024-04-18 15:11:04.905268] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:49.332 [2024-04-18 15:11:04.905274] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:49.332 [2024-04-18 15:11:04.905283] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905287] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.905292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.332 [2024-04-18 15:11:04.905305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51f40, cid 4, qid 0 00:23:49.332 [2024-04-18 15:11:04.905361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.332 [2024-04-18 15:11:04.905366] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.332 [2024-04-18 15:11:04.905370] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905373] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c09300): datao=0, datal=4096, cccid=4 00:23:49.332 [2024-04-18 15:11:04.905378] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c51f40) on tqpair(0x1c09300): expected_datao=0, payload_size=4096 00:23:49.332 [2024-04-18 15:11:04.905383] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905389] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905392] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.905422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.905425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905429] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51f40) on tqpair=0x1c09300 00:23:49.332 [2024-04-18 15:11:04.905444] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:49.332 [2024-04-18 15:11:04.905495] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.905510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.332 [2024-04-18 15:11:04.905517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905521] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905525] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.905531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.332 [2024-04-18 15:11:04.905567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x1c51f40, cid 4, qid 0 00:23:49.332 [2024-04-18 15:11:04.905574] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c520a0, cid 5, qid 0 00:23:49.332 [2024-04-18 15:11:04.905699] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.332 [2024-04-18 15:11:04.905705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.332 [2024-04-18 15:11:04.905709] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905713] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c09300): datao=0, datal=1024, cccid=4 00:23:49.332 [2024-04-18 15:11:04.905718] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c51f40) on tqpair(0x1c09300): expected_datao=0, payload_size=1024 00:23:49.332 [2024-04-18 15:11:04.905723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905729] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905733] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905739] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.905744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.905748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.905752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c520a0) on tqpair=0x1c09300 00:23:49.332 [2024-04-18 15:11:04.946642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.946678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.946683] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946689] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51f40) on tqpair=0x1c09300 00:23:49.332 [2024-04-18 15:11:04.946724] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.946741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.332 [2024-04-18 15:11:04.946779] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51f40, cid 4, qid 0 00:23:49.332 [2024-04-18 15:11:04.946876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.332 [2024-04-18 15:11:04.946882] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.332 [2024-04-18 15:11:04.946886] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946891] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c09300): datao=0, datal=3072, cccid=4 00:23:49.332 [2024-04-18 15:11:04.946896] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c51f40) on tqpair(0x1c09300): expected_datao=0, payload_size=3072 00:23:49.332 [2024-04-18 15:11:04.946901] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946909] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946913] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946921] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.946927] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.946931] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946935] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51f40) on tqpair=0x1c09300 00:23:49.332 [2024-04-18 15:11:04.946945] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.946949] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c09300) 00:23:49.332 [2024-04-18 15:11:04.946955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.332 [2024-04-18 15:11:04.946975] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51f40, cid 4, qid 0 00:23:49.332 [2024-04-18 15:11:04.947034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.332 [2024-04-18 15:11:04.947040] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.332 [2024-04-18 15:11:04.947044] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.947048] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c09300): datao=0, datal=8, cccid=4 00:23:49.332 [2024-04-18 15:11:04.947053] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c51f40) on tqpair(0x1c09300): expected_datao=0, payload_size=8 00:23:49.332 [2024-04-18 15:11:04.947057] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.947064] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.947067] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.992639] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.332 [2024-04-18 15:11:04.992692] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.332 [2024-04-18 15:11:04.992698] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.332 [2024-04-18 15:11:04.992706] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51f40) on tqpair=0x1c09300 00:23:49.332 ===================================================== 00:23:49.332 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:49.332 ===================================================== 00:23:49.332 Controller Capabilities/Features 00:23:49.332 ================================ 00:23:49.332 Vendor ID: 0000 00:23:49.332 Subsystem Vendor ID: 0000 00:23:49.333 Serial Number: .................... 00:23:49.333 Model Number: ........................................ 
00:23:49.333 Firmware Version: 24.05 00:23:49.333 Recommended Arb Burst: 0 00:23:49.333 IEEE OUI Identifier: 00 00 00 00:23:49.333 Multi-path I/O 00:23:49.333 May have multiple subsystem ports: No 00:23:49.333 May have multiple controllers: No 00:23:49.333 Associated with SR-IOV VF: No 00:23:49.333 Max Data Transfer Size: 131072 00:23:49.333 Max Number of Namespaces: 0 00:23:49.333 Max Number of I/O Queues: 1024 00:23:49.333 NVMe Specification Version (VS): 1.3 00:23:49.333 NVMe Specification Version (Identify): 1.3 00:23:49.333 Maximum Queue Entries: 128 00:23:49.333 Contiguous Queues Required: Yes 00:23:49.333 Arbitration Mechanisms Supported 00:23:49.333 Weighted Round Robin: Not Supported 00:23:49.333 Vendor Specific: Not Supported 00:23:49.333 Reset Timeout: 15000 ms 00:23:49.333 Doorbell Stride: 4 bytes 00:23:49.333 NVM Subsystem Reset: Not Supported 00:23:49.333 Command Sets Supported 00:23:49.333 NVM Command Set: Supported 00:23:49.333 Boot Partition: Not Supported 00:23:49.333 Memory Page Size Minimum: 4096 bytes 00:23:49.333 Memory Page Size Maximum: 4096 bytes 00:23:49.333 Persistent Memory Region: Not Supported 00:23:49.333 Optional Asynchronous Events Supported 00:23:49.333 Namespace Attribute Notices: Not Supported 00:23:49.333 Firmware Activation Notices: Not Supported 00:23:49.333 ANA Change Notices: Not Supported 00:23:49.333 PLE Aggregate Log Change Notices: Not Supported 00:23:49.333 LBA Status Info Alert Notices: Not Supported 00:23:49.333 EGE Aggregate Log Change Notices: Not Supported 00:23:49.333 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.333 Zone Descriptor Change Notices: Not Supported 00:23:49.333 Discovery Log Change Notices: Supported 00:23:49.333 Controller Attributes 00:23:49.333 128-bit Host Identifier: Not Supported 00:23:49.333 Non-Operational Permissive Mode: Not Supported 00:23:49.333 NVM Sets: Not Supported 00:23:49.333 Read Recovery Levels: Not Supported 00:23:49.333 Endurance Groups: Not Supported 00:23:49.333 Predictable Latency Mode: Not Supported 00:23:49.333 Traffic Based Keep ALive: Not Supported 00:23:49.333 Namespace Granularity: Not Supported 00:23:49.333 SQ Associations: Not Supported 00:23:49.333 UUID List: Not Supported 00:23:49.333 Multi-Domain Subsystem: Not Supported 00:23:49.333 Fixed Capacity Management: Not Supported 00:23:49.333 Variable Capacity Management: Not Supported 00:23:49.333 Delete Endurance Group: Not Supported 00:23:49.333 Delete NVM Set: Not Supported 00:23:49.333 Extended LBA Formats Supported: Not Supported 00:23:49.333 Flexible Data Placement Supported: Not Supported 00:23:49.333 00:23:49.333 Controller Memory Buffer Support 00:23:49.333 ================================ 00:23:49.333 Supported: No 00:23:49.333 00:23:49.333 Persistent Memory Region Support 00:23:49.333 ================================ 00:23:49.333 Supported: No 00:23:49.333 00:23:49.333 Admin Command Set Attributes 00:23:49.333 ============================ 00:23:49.333 Security Send/Receive: Not Supported 00:23:49.333 Format NVM: Not Supported 00:23:49.333 Firmware Activate/Download: Not Supported 00:23:49.333 Namespace Management: Not Supported 00:23:49.333 Device Self-Test: Not Supported 00:23:49.333 Directives: Not Supported 00:23:49.333 NVMe-MI: Not Supported 00:23:49.333 Virtualization Management: Not Supported 00:23:49.333 Doorbell Buffer Config: Not Supported 00:23:49.333 Get LBA Status Capability: Not Supported 00:23:49.333 Command & Feature Lockdown Capability: Not Supported 00:23:49.333 Abort Command Limit: 1 00:23:49.333 Async 
Event Request Limit: 4 00:23:49.333 Number of Firmware Slots: N/A 00:23:49.333 Firmware Slot 1 Read-Only: N/A 00:23:49.333 Firmware Activation Without Reset: N/A 00:23:49.333 Multiple Update Detection Support: N/A 00:23:49.333 Firmware Update Granularity: No Information Provided 00:23:49.333 Per-Namespace SMART Log: No 00:23:49.333 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.333 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:49.333 Command Effects Log Page: Not Supported 00:23:49.333 Get Log Page Extended Data: Supported 00:23:49.333 Telemetry Log Pages: Not Supported 00:23:49.333 Persistent Event Log Pages: Not Supported 00:23:49.333 Supported Log Pages Log Page: May Support 00:23:49.333 Commands Supported & Effects Log Page: Not Supported 00:23:49.333 Feature Identifiers & Effects Log Page:May Support 00:23:49.333 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.333 Data Area 4 for Telemetry Log: Not Supported 00:23:49.333 Error Log Page Entries Supported: 128 00:23:49.333 Keep Alive: Not Supported 00:23:49.333 00:23:49.333 NVM Command Set Attributes 00:23:49.333 ========================== 00:23:49.333 Submission Queue Entry Size 00:23:49.333 Max: 1 00:23:49.333 Min: 1 00:23:49.333 Completion Queue Entry Size 00:23:49.333 Max: 1 00:23:49.333 Min: 1 00:23:49.333 Number of Namespaces: 0 00:23:49.333 Compare Command: Not Supported 00:23:49.333 Write Uncorrectable Command: Not Supported 00:23:49.333 Dataset Management Command: Not Supported 00:23:49.333 Write Zeroes Command: Not Supported 00:23:49.333 Set Features Save Field: Not Supported 00:23:49.333 Reservations: Not Supported 00:23:49.333 Timestamp: Not Supported 00:23:49.333 Copy: Not Supported 00:23:49.333 Volatile Write Cache: Not Present 00:23:49.333 Atomic Write Unit (Normal): 1 00:23:49.333 Atomic Write Unit (PFail): 1 00:23:49.333 Atomic Compare & Write Unit: 1 00:23:49.333 Fused Compare & Write: Supported 00:23:49.333 Scatter-Gather List 00:23:49.333 SGL Command Set: Supported 00:23:49.333 SGL Keyed: Supported 00:23:49.333 SGL Bit Bucket Descriptor: Not Supported 00:23:49.333 SGL Metadata Pointer: Not Supported 00:23:49.333 Oversized SGL: Not Supported 00:23:49.333 SGL Metadata Address: Not Supported 00:23:49.333 SGL Offset: Supported 00:23:49.333 Transport SGL Data Block: Not Supported 00:23:49.333 Replay Protected Memory Block: Not Supported 00:23:49.333 00:23:49.333 Firmware Slot Information 00:23:49.333 ========================= 00:23:49.333 Active slot: 0 00:23:49.333 00:23:49.333 00:23:49.333 Error Log 00:23:49.333 ========= 00:23:49.333 00:23:49.333 Active Namespaces 00:23:49.333 ================= 00:23:49.333 Discovery Log Page 00:23:49.333 ================== 00:23:49.333 Generation Counter: 2 00:23:49.333 Number of Records: 2 00:23:49.333 Record Format: 0 00:23:49.333 00:23:49.333 Discovery Log Entry 0 00:23:49.333 ---------------------- 00:23:49.333 Transport Type: 3 (TCP) 00:23:49.333 Address Family: 1 (IPv4) 00:23:49.333 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:49.333 Entry Flags: 00:23:49.333 Duplicate Returned Information: 1 00:23:49.333 Explicit Persistent Connection Support for Discovery: 1 00:23:49.333 Transport Requirements: 00:23:49.333 Secure Channel: Not Required 00:23:49.333 Port ID: 0 (0x0000) 00:23:49.333 Controller ID: 65535 (0xffff) 00:23:49.333 Admin Max SQ Size: 128 00:23:49.333 Transport Service Identifier: 4420 00:23:49.333 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:49.333 Transport Address: 10.0.0.2 00:23:49.333 
Discovery Log Entry 1 00:23:49.333 ---------------------- 00:23:49.333 Transport Type: 3 (TCP) 00:23:49.333 Address Family: 1 (IPv4) 00:23:49.333 Subsystem Type: 2 (NVM Subsystem) 00:23:49.333 Entry Flags: 00:23:49.333 Duplicate Returned Information: 0 00:23:49.333 Explicit Persistent Connection Support for Discovery: 0 00:23:49.333 Transport Requirements: 00:23:49.334 Secure Channel: Not Required 00:23:49.334 Port ID: 0 (0x0000) 00:23:49.334 Controller ID: 65535 (0xffff) 00:23:49.334 Admin Max SQ Size: 128 00:23:49.334 Transport Service Identifier: 4420 00:23:49.334 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:49.334 Transport Address: 10.0.0.2 [2024-04-18 15:11:04.992853] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:49.334 [2024-04-18 15:11:04.992872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.334 [2024-04-18 15:11:04.992880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.334 [2024-04-18 15:11:04.992887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.334 [2024-04-18 15:11:04.992894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.334 [2024-04-18 15:11:04.992909] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.992914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.992918] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.992931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.992961] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993061] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993080] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993085] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993089] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993115] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993223] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993227] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993231] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993238] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:49.334 [2024-04-18 15:11:04.993243] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:49.334 [2024-04-18 15:11:04.993253] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993257] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993261] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993342] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993348] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993352] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993356] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993368] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993372] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993376] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993397] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993448] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993452] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993456] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993466] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993470] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993474] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993588] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 
15:11:04.993595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993599] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993603] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993618] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993643] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993701] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993707] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993711] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993715] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993725] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993729] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993733] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993754] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993805] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993815] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993819] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993829] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993837] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993857] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.993907] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.334 [2024-04-18 15:11:04.993913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.334 [2024-04-18 15:11:04.993917] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:49.334 [2024-04-18 15:11:04.993921] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.334 [2024-04-18 15:11:04.993931] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993935] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.334 [2024-04-18 15:11:04.993939] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.334 [2024-04-18 15:11:04.993945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.334 [2024-04-18 15:11:04.993959] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.334 [2024-04-18 15:11:04.994004] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994010] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994014] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994018] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994029] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994036] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994044] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994066] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994117] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994125] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994135] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994139] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994164] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994211] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994221] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994235] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994239] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994243] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994264] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994321] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994325] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994339] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994343] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994364] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994421] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994425] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994435] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994439] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994443] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994523] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994527] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994546] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 
15:11:04.994555] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994576] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994641] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994651] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994655] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994659] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994680] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994729] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.335 [2024-04-18 15:11:04.994735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.335 [2024-04-18 15:11:04.994739] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994743] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.335 [2024-04-18 15:11:04.994754] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994758] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.335 [2024-04-18 15:11:04.994762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.335 [2024-04-18 15:11:04.994768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.335 [2024-04-18 15:11:04.994782] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.335 [2024-04-18 15:11:04.994832] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.994838] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.994842] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.994856] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.994871] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.994885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.994935] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.994941] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.994945] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994949] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.994959] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994963] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.994967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.994974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.994988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995032] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995042] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995046] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995056] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995060] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995064] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995085] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995145] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995151] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995155] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995159] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995168] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995196] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995239] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995245] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995249] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995253] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995263] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995267] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995270] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995290] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995341] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995347] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995351] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995355] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995364] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995369] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995372] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995393] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995453] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995462] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995467] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995470] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995491] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995534] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:23:49.336 [2024-04-18 15:11:04.995540] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995543] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995555] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995565] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995585] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995590] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995661] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995700] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995724] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995773] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995779] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995782] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995786] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995800] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995804] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.336 [2024-04-18 15:11:04.995810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.336 [2024-04-18 15:11:04.995824] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.336 [2024-04-18 15:11:04.995871] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.336 [2024-04-18 15:11:04.995877] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.336 [2024-04-18 15:11:04.995880] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995884] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.336 [2024-04-18 15:11:04.995894] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995898] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.336 [2024-04-18 15:11:04.995902] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.995908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.995922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.995988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.995994] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.995997] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996002] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.996088] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.996094] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.996098] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996102] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996112] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996141] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.996189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.996195] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.996199] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996203] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on 
tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996213] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996217] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996221] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996242] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.996286] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.996292] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.996296] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996300] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996310] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996314] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996318] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996339] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.996389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.996395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.996399] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996403] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996413] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996441] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.996493] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.996500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.996503] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996508] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.996517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996522] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.996526] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.996532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.996546] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.999633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.999656] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.999660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.999665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.999681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.999685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.999690] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c09300) 00:23:49.337 [2024-04-18 15:11:04.999698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.337 [2024-04-18 15:11:04.999719] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c51de0, cid 3, qid 0 00:23:49.337 [2024-04-18 15:11:04.999777] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.337 [2024-04-18 15:11:04.999783] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.337 [2024-04-18 15:11:04.999787] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.337 [2024-04-18 15:11:04.999792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c51de0) on tqpair=0x1c09300 00:23:49.337 [2024-04-18 15:11:04.999801] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:49.337 00:23:49.337 15:11:05 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:49.674 [2024-04-18 15:11:05.048774] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:23:49.674 [2024-04-18 15:11:05.048827] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80270 ] 00:23:49.674 [2024-04-18 15:11:05.189360] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:49.674 [2024-04-18 15:11:05.189459] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:49.674 [2024-04-18 15:11:05.189467] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:49.674 [2024-04-18 15:11:05.189486] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:49.674 [2024-04-18 15:11:05.189527] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:49.674 [2024-04-18 15:11:05.189685] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:49.674 [2024-04-18 15:11:05.189730] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1bef300 0 00:23:49.674 [2024-04-18 15:11:05.195632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:49.674 [2024-04-18 15:11:05.195657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:49.674 [2024-04-18 15:11:05.195663] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:49.674 [2024-04-18 15:11:05.195667] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:49.674 [2024-04-18 15:11:05.195724] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.195731] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.195735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.674 [2024-04-18 15:11:05.195753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:49.674 [2024-04-18 15:11:05.195783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.674 [2024-04-18 15:11:05.203599] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.674 [2024-04-18 15:11:05.203622] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.674 [2024-04-18 15:11:05.203627] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203632] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.674 [2024-04-18 15:11:05.203649] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:49.674 [2024-04-18 15:11:05.203659] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:49.674 [2024-04-18 15:11:05.203666] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:49.674 [2024-04-18 15:11:05.203691] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203696] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203700] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.674 [2024-04-18 15:11:05.203711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.674 [2024-04-18 15:11:05.203743] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.674 [2024-04-18 15:11:05.203825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.674 [2024-04-18 15:11:05.203831] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.674 [2024-04-18 15:11:05.203835] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203839] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.674 [2024-04-18 15:11:05.203850] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:49.674 [2024-04-18 15:11:05.203858] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:49.674 [2024-04-18 15:11:05.203866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203871] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203875] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.674 [2024-04-18 15:11:05.203881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.674 [2024-04-18 15:11:05.203897] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.674 [2024-04-18 15:11:05.203951] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.674 [2024-04-18 15:11:05.203957] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.674 [2024-04-18 15:11:05.203961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203965] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.674 [2024-04-18 15:11:05.203972] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:49.674 [2024-04-18 15:11:05.203980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:49.674 [2024-04-18 15:11:05.203987] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203991] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.203994] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.674 [2024-04-18 15:11:05.204001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.674 [2024-04-18 15:11:05.204015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.674 [2024-04-18 15:11:05.204061] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.674 [2024-04-18 15:11:05.204067] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.674 [2024-04-18 
15:11:05.204071] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.204075] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.674 [2024-04-18 15:11:05.204081] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:49.674 [2024-04-18 15:11:05.204091] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.204095] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.204099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.674 [2024-04-18 15:11:05.204105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.674 [2024-04-18 15:11:05.204119] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.674 [2024-04-18 15:11:05.204165] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.674 [2024-04-18 15:11:05.204171] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.674 [2024-04-18 15:11:05.204175] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.674 [2024-04-18 15:11:05.204179] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.674 [2024-04-18 15:11:05.204184] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:49.675 [2024-04-18 15:11:05.204190] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:49.675 [2024-04-18 15:11:05.204198] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:49.675 [2024-04-18 15:11:05.204304] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:49.675 [2024-04-18 15:11:05.204309] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:49.675 [2024-04-18 15:11:05.204318] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204322] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204326] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.204332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.675 [2024-04-18 15:11:05.204348] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.675 [2024-04-18 15:11:05.204402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.675 [2024-04-18 15:11:05.204409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.675 [2024-04-18 15:11:05.204413] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.675 
[2024-04-18 15:11:05.204423] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:49.675 [2024-04-18 15:11:05.204432] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204440] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.204446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.675 [2024-04-18 15:11:05.204460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.675 [2024-04-18 15:11:05.204538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.675 [2024-04-18 15:11:05.204545] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.675 [2024-04-18 15:11:05.204549] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204553] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.675 [2024-04-18 15:11:05.204558] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:49.675 [2024-04-18 15:11:05.204564] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.204583] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:49.675 [2024-04-18 15:11:05.204594] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.204623] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204627] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.204634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.675 [2024-04-18 15:11:05.204650] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.675 [2024-04-18 15:11:05.204751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.675 [2024-04-18 15:11:05.204757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.675 [2024-04-18 15:11:05.204761] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204766] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=4096, cccid=0 00:23:49.675 [2024-04-18 15:11:05.204771] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c379c0) on tqpair(0x1bef300): expected_datao=0, payload_size=4096 00:23:49.675 [2024-04-18 15:11:05.204777] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204786] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204791] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.675 [2024-04-18 15:11:05.204806] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.675 [2024-04-18 15:11:05.204809] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204814] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.675 [2024-04-18 15:11:05.204825] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:49.675 [2024-04-18 15:11:05.204831] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:49.675 [2024-04-18 15:11:05.204836] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:49.675 [2024-04-18 15:11:05.204845] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:49.675 [2024-04-18 15:11:05.204850] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:49.675 [2024-04-18 15:11:05.204856] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.204865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.204873] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204877] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204881] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.204888] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.675 [2024-04-18 15:11:05.204903] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.675 [2024-04-18 15:11:05.204957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.675 [2024-04-18 15:11:05.204963] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.675 [2024-04-18 15:11:05.204967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c379c0) on tqpair=0x1bef300 00:23:49.675 [2024-04-18 15:11:05.204980] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204985] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.204989] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.204995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.675 [2024-04-18 15:11:05.205002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205006] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205010] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.205015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.675 [2024-04-18 15:11:05.205022] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205026] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205030] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.205035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.675 [2024-04-18 15:11:05.205042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205050] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.205056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.675 [2024-04-18 15:11:05.205061] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.205073] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:49.675 [2024-04-18 15:11:05.205080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.675 [2024-04-18 15:11:05.205091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.675 [2024-04-18 15:11:05.205107] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c379c0, cid 0, qid 0 00:23:49.675 [2024-04-18 15:11:05.205112] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37b20, cid 1, qid 0 00:23:49.675 [2024-04-18 15:11:05.205117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37c80, cid 2, qid 0 00:23:49.675 [2024-04-18 15:11:05.205122] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37de0, cid 3, qid 0 00:23:49.675 [2024-04-18 15:11:05.205127] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.675 [2024-04-18 15:11:05.205212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.675 [2024-04-18 15:11:05.205218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.675 [2024-04-18 15:11:05.205222] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.675 [2024-04-18 15:11:05.205226] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.676 [2024-04-18 15:11:05.205232] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:49.676 [2024-04-18 15:11:05.205238] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205247] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205254] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205260] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205264] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205268] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.205275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.676 [2024-04-18 15:11:05.205290] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.676 [2024-04-18 15:11:05.205334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.676 [2024-04-18 15:11:05.205342] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.676 [2024-04-18 15:11:05.205347] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205353] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.676 [2024-04-18 15:11:05.205400] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205411] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205421] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.205434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.676 [2024-04-18 15:11:05.205451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.676 [2024-04-18 15:11:05.205521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.676 [2024-04-18 15:11:05.205528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.676 [2024-04-18 15:11:05.205532] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205536] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=4096, cccid=4 00:23:49.676 [2024-04-18 15:11:05.205551] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c37f40) on tqpair(0x1bef300): expected_datao=0, payload_size=4096 00:23:49.676 [2024-04-18 15:11:05.205556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205565] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205569] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205578] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.676 
[2024-04-18 15:11:05.205584] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.676 [2024-04-18 15:11:05.205588] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205592] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.676 [2024-04-18 15:11:05.205604] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:49.676 [2024-04-18 15:11:05.205617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205639] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.205645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.676 [2024-04-18 15:11:05.205663] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.676 [2024-04-18 15:11:05.205737] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.676 [2024-04-18 15:11:05.205743] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.676 [2024-04-18 15:11:05.205747] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205751] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=4096, cccid=4 00:23:49.676 [2024-04-18 15:11:05.205756] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c37f40) on tqpair(0x1bef300): expected_datao=0, payload_size=4096 00:23:49.676 [2024-04-18 15:11:05.205761] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205767] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205771] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.676 [2024-04-18 15:11:05.205785] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.676 [2024-04-18 15:11:05.205789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205793] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.676 [2024-04-18 15:11:05.205809] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205819] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.205836] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.676 [2024-04-18 15:11:05.205852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.676 [2024-04-18 15:11:05.205906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.676 [2024-04-18 15:11:05.205912] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.676 [2024-04-18 15:11:05.205916] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205920] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=4096, cccid=4 00:23:49.676 [2024-04-18 15:11:05.205925] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c37f40) on tqpair(0x1bef300): expected_datao=0, payload_size=4096 00:23:49.676 [2024-04-18 15:11:05.205930] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205936] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205940] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205948] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.676 [2024-04-18 15:11:05.205954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.676 [2024-04-18 15:11:05.205958] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.205962] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.676 [2024-04-18 15:11:05.205972] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.205993] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.206000] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.206006] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.206013] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:49.676 [2024-04-18 15:11:05.206018] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:49.676 [2024-04-18 15:11:05.206024] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:49.676 [2024-04-18 15:11:05.206045] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.206049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.206056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:49.676 [2024-04-18 15:11:05.206063] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.206067] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.676 [2024-04-18 15:11:05.206071] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bef300) 00:23:49.676 [2024-04-18 15:11:05.206078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.676 [2024-04-18 15:11:05.206098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.676 [2024-04-18 15:11:05.206103] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c380a0, cid 5, qid 0 00:23:49.676 [2024-04-18 15:11:05.206164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.676 [2024-04-18 15:11:05.206170] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.676 [2024-04-18 15:11:05.206174] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206179] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206186] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206192] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206200] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c380a0) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206215] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c380a0, cid 5, qid 0 00:23:49.677 [2024-04-18 15:11:05.206283] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206290] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206298] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c380a0) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206309] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206313] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206333] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c380a0, cid 5, qid 0 00:23:49.677 [2024-04-18 15:11:05.206389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206396] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 
[2024-04-18 15:11:05.206400] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206404] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c380a0) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206415] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206419] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206439] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c380a0, cid 5, qid 0 00:23:49.677 [2024-04-18 15:11:05.206487] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206493] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206497] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c380a0) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206514] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206518] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206532] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206564] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206579] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206583] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bef300) 00:23:49.677 [2024-04-18 15:11:05.206589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.677 [2024-04-18 15:11:05.206617] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c380a0, cid 5, qid 0 00:23:49.677 [2024-04-18 15:11:05.206623] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37f40, cid 4, qid 0 00:23:49.677 [2024-04-18 15:11:05.206628] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c38200, cid 6, qid 0 00:23:49.677 [2024-04-18 15:11:05.206632] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c38360, cid 7, qid 0 00:23:49.677 [2024-04-18 15:11:05.206748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.677 [2024-04-18 15:11:05.206755] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.677 [2024-04-18 15:11:05.206759] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206763] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=8192, cccid=5 00:23:49.677 [2024-04-18 15:11:05.206768] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c380a0) on tqpair(0x1bef300): expected_datao=0, payload_size=8192 00:23:49.677 [2024-04-18 15:11:05.206773] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206794] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206801] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.677 [2024-04-18 15:11:05.206814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.677 [2024-04-18 15:11:05.206817] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206821] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=512, cccid=4 00:23:49.677 [2024-04-18 15:11:05.206826] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c37f40) on tqpair(0x1bef300): expected_datao=0, payload_size=512 00:23:49.677 [2024-04-18 15:11:05.206831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206837] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206841] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.677 [2024-04-18 15:11:05.206852] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.677 [2024-04-18 15:11:05.206856] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206859] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=512, cccid=6 00:23:49.677 [2024-04-18 15:11:05.206864] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c38200) on tqpair(0x1bef300): expected_datao=0, payload_size=512 00:23:49.677 [2024-04-18 15:11:05.206869] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206875] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206879] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.677 [2024-04-18 15:11:05.206890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.677 [2024-04-18 15:11:05.206894] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206898] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1bef300): datao=0, datal=4096, cccid=7 00:23:49.677 [2024-04-18 15:11:05.206903] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c38360) on tqpair(0x1bef300): expected_datao=0, payload_size=4096 00:23:49.677 [2024-04-18 15:11:05.206907] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206914] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206918] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206926] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206932] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206935] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c380a0) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206958] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206971] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37f40) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.206983] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.206989] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.206993] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.206997] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c38200) on tqpair=0x1bef300 00:23:49.677 [2024-04-18 15:11:05.207005] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.677 [2024-04-18 15:11:05.207011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.677 [2024-04-18 15:11:05.207015] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.677 [2024-04-18 15:11:05.207019] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c38360) on tqpair=0x1bef300 00:23:49.677 ===================================================== 00:23:49.678 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.678 ===================================================== 00:23:49.678 Controller Capabilities/Features 00:23:49.678 ================================ 00:23:49.678 Vendor ID: 8086 00:23:49.678 Subsystem Vendor ID: 8086 00:23:49.678 Serial Number: SPDK00000000000001 00:23:49.678 Model Number: SPDK bdev Controller 00:23:49.678 Firmware Version: 24.05 00:23:49.678 Recommended Arb Burst: 6 00:23:49.678 IEEE OUI Identifier: e4 d2 5c 00:23:49.678 Multi-path I/O 00:23:49.678 May have multiple subsystem ports: Yes 00:23:49.678 May have multiple controllers: Yes 00:23:49.678 Associated with SR-IOV VF: No 00:23:49.678 Max Data Transfer Size: 131072 00:23:49.678 Max Number of Namespaces: 32 00:23:49.678 Max Number of I/O Queues: 127 00:23:49.678 NVMe Specification Version (VS): 1.3 00:23:49.678 NVMe Specification Version (Identify): 1.3 00:23:49.678 Maximum Queue Entries: 128 00:23:49.678 Contiguous Queues Required: Yes 00:23:49.678 Arbitration Mechanisms Supported 00:23:49.678 Weighted Round Robin: Not Supported 00:23:49.678 Vendor Specific: Not Supported 00:23:49.678 Reset 
Timeout: 15000 ms 00:23:49.678 Doorbell Stride: 4 bytes 00:23:49.678 NVM Subsystem Reset: Not Supported 00:23:49.678 Command Sets Supported 00:23:49.678 NVM Command Set: Supported 00:23:49.678 Boot Partition: Not Supported 00:23:49.678 Memory Page Size Minimum: 4096 bytes 00:23:49.678 Memory Page Size Maximum: 4096 bytes 00:23:49.678 Persistent Memory Region: Not Supported 00:23:49.678 Optional Asynchronous Events Supported 00:23:49.678 Namespace Attribute Notices: Supported 00:23:49.678 Firmware Activation Notices: Not Supported 00:23:49.678 ANA Change Notices: Not Supported 00:23:49.678 PLE Aggregate Log Change Notices: Not Supported 00:23:49.678 LBA Status Info Alert Notices: Not Supported 00:23:49.678 EGE Aggregate Log Change Notices: Not Supported 00:23:49.678 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.678 Zone Descriptor Change Notices: Not Supported 00:23:49.678 Discovery Log Change Notices: Not Supported 00:23:49.678 Controller Attributes 00:23:49.678 128-bit Host Identifier: Supported 00:23:49.678 Non-Operational Permissive Mode: Not Supported 00:23:49.678 NVM Sets: Not Supported 00:23:49.678 Read Recovery Levels: Not Supported 00:23:49.678 Endurance Groups: Not Supported 00:23:49.678 Predictable Latency Mode: Not Supported 00:23:49.678 Traffic Based Keep ALive: Not Supported 00:23:49.678 Namespace Granularity: Not Supported 00:23:49.678 SQ Associations: Not Supported 00:23:49.678 UUID List: Not Supported 00:23:49.678 Multi-Domain Subsystem: Not Supported 00:23:49.678 Fixed Capacity Management: Not Supported 00:23:49.678 Variable Capacity Management: Not Supported 00:23:49.678 Delete Endurance Group: Not Supported 00:23:49.678 Delete NVM Set: Not Supported 00:23:49.678 Extended LBA Formats Supported: Not Supported 00:23:49.678 Flexible Data Placement Supported: Not Supported 00:23:49.678 00:23:49.678 Controller Memory Buffer Support 00:23:49.678 ================================ 00:23:49.678 Supported: No 00:23:49.678 00:23:49.678 Persistent Memory Region Support 00:23:49.678 ================================ 00:23:49.678 Supported: No 00:23:49.678 00:23:49.678 Admin Command Set Attributes 00:23:49.678 ============================ 00:23:49.678 Security Send/Receive: Not Supported 00:23:49.678 Format NVM: Not Supported 00:23:49.678 Firmware Activate/Download: Not Supported 00:23:49.678 Namespace Management: Not Supported 00:23:49.678 Device Self-Test: Not Supported 00:23:49.678 Directives: Not Supported 00:23:49.678 NVMe-MI: Not Supported 00:23:49.678 Virtualization Management: Not Supported 00:23:49.678 Doorbell Buffer Config: Not Supported 00:23:49.678 Get LBA Status Capability: Not Supported 00:23:49.678 Command & Feature Lockdown Capability: Not Supported 00:23:49.678 Abort Command Limit: 4 00:23:49.678 Async Event Request Limit: 4 00:23:49.678 Number of Firmware Slots: N/A 00:23:49.678 Firmware Slot 1 Read-Only: N/A 00:23:49.678 Firmware Activation Without Reset: N/A 00:23:49.678 Multiple Update Detection Support: N/A 00:23:49.678 Firmware Update Granularity: No Information Provided 00:23:49.678 Per-Namespace SMART Log: No 00:23:49.678 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.678 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:49.678 Command Effects Log Page: Supported 00:23:49.678 Get Log Page Extended Data: Supported 00:23:49.678 Telemetry Log Pages: Not Supported 00:23:49.678 Persistent Event Log Pages: Not Supported 00:23:49.678 Supported Log Pages Log Page: May Support 00:23:49.678 Commands Supported & Effects Log Page: Not Supported 
00:23:49.678 Feature Identifiers & Effects Log Page:May Support 00:23:49.678 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.678 Data Area 4 for Telemetry Log: Not Supported 00:23:49.678 Error Log Page Entries Supported: 128 00:23:49.678 Keep Alive: Supported 00:23:49.678 Keep Alive Granularity: 10000 ms 00:23:49.678 00:23:49.678 NVM Command Set Attributes 00:23:49.678 ========================== 00:23:49.678 Submission Queue Entry Size 00:23:49.678 Max: 64 00:23:49.678 Min: 64 00:23:49.678 Completion Queue Entry Size 00:23:49.678 Max: 16 00:23:49.678 Min: 16 00:23:49.678 Number of Namespaces: 32 00:23:49.678 Compare Command: Supported 00:23:49.678 Write Uncorrectable Command: Not Supported 00:23:49.678 Dataset Management Command: Supported 00:23:49.678 Write Zeroes Command: Supported 00:23:49.678 Set Features Save Field: Not Supported 00:23:49.678 Reservations: Supported 00:23:49.678 Timestamp: Not Supported 00:23:49.678 Copy: Supported 00:23:49.678 Volatile Write Cache: Present 00:23:49.678 Atomic Write Unit (Normal): 1 00:23:49.678 Atomic Write Unit (PFail): 1 00:23:49.678 Atomic Compare & Write Unit: 1 00:23:49.678 Fused Compare & Write: Supported 00:23:49.678 Scatter-Gather List 00:23:49.678 SGL Command Set: Supported 00:23:49.678 SGL Keyed: Supported 00:23:49.678 SGL Bit Bucket Descriptor: Not Supported 00:23:49.678 SGL Metadata Pointer: Not Supported 00:23:49.678 Oversized SGL: Not Supported 00:23:49.678 SGL Metadata Address: Not Supported 00:23:49.678 SGL Offset: Supported 00:23:49.678 Transport SGL Data Block: Not Supported 00:23:49.678 Replay Protected Memory Block: Not Supported 00:23:49.678 00:23:49.678 Firmware Slot Information 00:23:49.678 ========================= 00:23:49.678 Active slot: 1 00:23:49.678 Slot 1 Firmware Revision: 24.05 00:23:49.678 00:23:49.678 00:23:49.678 Commands Supported and Effects 00:23:49.678 ============================== 00:23:49.678 Admin Commands 00:23:49.678 -------------- 00:23:49.678 Get Log Page (02h): Supported 00:23:49.678 Identify (06h): Supported 00:23:49.678 Abort (08h): Supported 00:23:49.678 Set Features (09h): Supported 00:23:49.678 Get Features (0Ah): Supported 00:23:49.678 Asynchronous Event Request (0Ch): Supported 00:23:49.678 Keep Alive (18h): Supported 00:23:49.678 I/O Commands 00:23:49.678 ------------ 00:23:49.678 Flush (00h): Supported LBA-Change 00:23:49.678 Write (01h): Supported LBA-Change 00:23:49.678 Read (02h): Supported 00:23:49.678 Compare (05h): Supported 00:23:49.678 Write Zeroes (08h): Supported LBA-Change 00:23:49.678 Dataset Management (09h): Supported LBA-Change 00:23:49.678 Copy (19h): Supported LBA-Change 00:23:49.678 Unknown (79h): Supported LBA-Change 00:23:49.678 Unknown (7Ah): Supported 00:23:49.678 00:23:49.678 Error Log 00:23:49.678 ========= 00:23:49.678 00:23:49.678 Arbitration 00:23:49.678 =========== 00:23:49.678 Arbitration Burst: 1 00:23:49.678 00:23:49.678 Power Management 00:23:49.678 ================ 00:23:49.679 Number of Power States: 1 00:23:49.679 Current Power State: Power State #0 00:23:49.679 Power State #0: 00:23:49.679 Max Power: 0.00 W 00:23:49.679 Non-Operational State: Operational 00:23:49.679 Entry Latency: Not Reported 00:23:49.679 Exit Latency: Not Reported 00:23:49.679 Relative Read Throughput: 0 00:23:49.679 Relative Read Latency: 0 00:23:49.679 Relative Write Throughput: 0 00:23:49.679 Relative Write Latency: 0 00:23:49.679 Idle Power: Not Reported 00:23:49.679 Active Power: Not Reported 00:23:49.679 Non-Operational Permissive Mode: Not Supported 00:23:49.679 
00:23:49.679 Health Information 00:23:49.679 ================== 00:23:49.679 Critical Warnings: 00:23:49.679 Available Spare Space: OK 00:23:49.679 Temperature: OK 00:23:49.679 Device Reliability: OK 00:23:49.679 Read Only: No 00:23:49.679 Volatile Memory Backup: OK 00:23:49.679 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:49.679 Temperature Threshold: [2024-04-18 15:11:05.207152] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1bef300) 00:23:49.679 [2024-04-18 15:11:05.207165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.679 [2024-04-18 15:11:05.207183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c38360, cid 7, qid 0 00:23:49.679 [2024-04-18 15:11:05.207244] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.679 [2024-04-18 15:11:05.207250] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.679 [2024-04-18 15:11:05.207254] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c38360) on tqpair=0x1bef300 00:23:49.679 [2024-04-18 15:11:05.207294] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:49.679 [2024-04-18 15:11:05.207307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.679 [2024-04-18 15:11:05.207314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.679 [2024-04-18 15:11:05.207321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.679 [2024-04-18 15:11:05.207327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.679 [2024-04-18 15:11:05.207336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207344] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bef300) 00:23:49.679 [2024-04-18 15:11:05.207351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.679 [2024-04-18 15:11:05.207369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37de0, cid 3, qid 0 00:23:49.679 [2024-04-18 15:11:05.207417] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.679 [2024-04-18 15:11:05.207423] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.679 [2024-04-18 15:11:05.207427] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207431] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37de0) on tqpair=0x1bef300 00:23:49.679 [2024-04-18 15:11:05.207440] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207444] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:49.679 [2024-04-18 15:11:05.207448] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bef300) 00:23:49.679 [2024-04-18 15:11:05.207454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.679 [2024-04-18 15:11:05.207471] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37de0, cid 3, qid 0 00:23:49.679 [2024-04-18 15:11:05.207542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.679 [2024-04-18 15:11:05.207548] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.679 [2024-04-18 15:11:05.207552] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.207556] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37de0) on tqpair=0x1bef300 00:23:49.679 [2024-04-18 15:11:05.207563] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:49.679 [2024-04-18 15:11:05.211611] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:49.679 [2024-04-18 15:11:05.211631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.211636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.211640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1bef300) 00:23:49.679 [2024-04-18 15:11:05.211649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.679 [2024-04-18 15:11:05.211676] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c37de0, cid 3, qid 0 00:23:49.679 [2024-04-18 15:11:05.211737] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.679 [2024-04-18 15:11:05.211744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.679 [2024-04-18 15:11:05.211748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.679 [2024-04-18 15:11:05.211752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c37de0) on tqpair=0x1bef300 00:23:49.679 [2024-04-18 15:11:05.211761] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:23:49.679 0 Kelvin (-273 Celsius) 00:23:49.679 Available Spare: 0% 00:23:49.679 Available Spare Threshold: 0% 00:23:49.679 Life Percentage Used: 0% 00:23:49.679 Data Units Read: 0 00:23:49.679 Data Units Written: 0 00:23:49.679 Host Read Commands: 0 00:23:49.679 Host Write Commands: 0 00:23:49.679 Controller Busy Time: 0 minutes 00:23:49.679 Power Cycles: 0 00:23:49.679 Power On Hours: 0 hours 00:23:49.679 Unsafe Shutdowns: 0 00:23:49.679 Unrecoverable Media Errors: 0 00:23:49.679 Lifetime Error Log Entries: 0 00:23:49.679 Warning Temperature Time: 0 minutes 00:23:49.679 Critical Temperature Time: 0 minutes 00:23:49.679 00:23:49.679 Number of Queues 00:23:49.679 ================ 00:23:49.679 Number of I/O Submission Queues: 127 00:23:49.679 Number of I/O Completion Queues: 127 00:23:49.679 00:23:49.679 Active Namespaces 00:23:49.679 ================= 00:23:49.679 Namespace ID:1 00:23:49.679 Error Recovery Timeout: Unlimited 00:23:49.679 Command Set Identifier: NVM (00h) 00:23:49.679 Deallocate: Supported 00:23:49.679 
Deallocated/Unwritten Error: Not Supported 00:23:49.679 Deallocated Read Value: Unknown 00:23:49.679 Deallocate in Write Zeroes: Not Supported 00:23:49.679 Deallocated Guard Field: 0xFFFF 00:23:49.679 Flush: Supported 00:23:49.679 Reservation: Supported 00:23:49.679 Namespace Sharing Capabilities: Multiple Controllers 00:23:49.679 Size (in LBAs): 131072 (0GiB) 00:23:49.679 Capacity (in LBAs): 131072 (0GiB) 00:23:49.679 Utilization (in LBAs): 131072 (0GiB) 00:23:49.679 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:49.679 EUI64: ABCDEF0123456789 00:23:49.679 UUID: e1b51001-ae61-46fb-81e9-38c199658f0d 00:23:49.679 Thin Provisioning: Not Supported 00:23:49.679 Per-NS Atomic Units: Yes 00:23:49.679 Atomic Boundary Size (Normal): 0 00:23:49.679 Atomic Boundary Size (PFail): 0 00:23:49.679 Atomic Boundary Offset: 0 00:23:49.679 Maximum Single Source Range Length: 65535 00:23:49.679 Maximum Copy Length: 65535 00:23:49.680 Maximum Source Range Count: 1 00:23:49.680 NGUID/EUI64 Never Reused: No 00:23:49.680 Namespace Write Protected: No 00:23:49.680 Number of LBA Formats: 1 00:23:49.680 Current LBA Format: LBA Format #00 00:23:49.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:49.680 00:23:49.680 15:11:05 -- host/identify.sh@51 -- # sync 00:23:49.680 15:11:05 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.680 15:11:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.680 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:23:49.680 15:11:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.680 15:11:05 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:49.680 15:11:05 -- host/identify.sh@56 -- # nvmftestfini 00:23:49.680 15:11:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:49.680 15:11:05 -- nvmf/common.sh@117 -- # sync 00:23:49.680 15:11:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.680 15:11:05 -- nvmf/common.sh@120 -- # set +e 00:23:49.680 15:11:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.680 15:11:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.680 rmmod nvme_tcp 00:23:49.680 rmmod nvme_fabrics 00:23:49.680 rmmod nvme_keyring 00:23:49.680 15:11:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.680 15:11:05 -- nvmf/common.sh@124 -- # set -e 00:23:49.680 15:11:05 -- nvmf/common.sh@125 -- # return 0 00:23:49.680 15:11:05 -- nvmf/common.sh@478 -- # '[' -n 80209 ']' 00:23:49.680 15:11:05 -- nvmf/common.sh@479 -- # killprocess 80209 00:23:49.680 15:11:05 -- common/autotest_common.sh@936 -- # '[' -z 80209 ']' 00:23:49.680 15:11:05 -- common/autotest_common.sh@940 -- # kill -0 80209 00:23:49.680 15:11:05 -- common/autotest_common.sh@941 -- # uname 00:23:49.680 15:11:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:49.680 15:11:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80209 00:23:49.938 killing process with pid 80209 00:23:49.938 15:11:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:49.938 15:11:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:49.938 15:11:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80209' 00:23:49.938 15:11:05 -- common/autotest_common.sh@955 -- # kill 80209 00:23:49.938 [2024-04-18 15:11:05.385565] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:49.938 15:11:05 -- 
common/autotest_common.sh@960 -- # wait 80209 00:23:49.938 15:11:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:49.938 15:11:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:49.938 15:11:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:49.938 15:11:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.938 15:11:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.938 15:11:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.938 15:11:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.938 15:11:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.197 15:11:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:50.197 ************************************ 00:23:50.197 END TEST nvmf_identify 00:23:50.197 ************************************ 00:23:50.197 00:23:50.197 real 0m2.902s 00:23:50.197 user 0m7.699s 00:23:50.197 sys 0m0.861s 00:23:50.197 15:11:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:50.197 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 15:11:05 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.197 15:11:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:50.197 15:11:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:50.197 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:23:50.197 ************************************ 00:23:50.197 START TEST nvmf_perf 00:23:50.197 ************************************ 00:23:50.197 15:11:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:50.456 * Looking for test storage... 00:23:50.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:50.456 15:11:05 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:50.456 15:11:05 -- nvmf/common.sh@7 -- # uname -s 00:23:50.456 15:11:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.456 15:11:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.456 15:11:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.456 15:11:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.456 15:11:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.456 15:11:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.456 15:11:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.456 15:11:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.456 15:11:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.456 15:11:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.456 15:11:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:50.456 15:11:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:23:50.456 15:11:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.456 15:11:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.456 15:11:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:50.456 15:11:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.456 15:11:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:50.456 15:11:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.456 15:11:06 -- scripts/common.sh@510 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.456 15:11:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.456 15:11:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.456 15:11:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.456 15:11:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.456 15:11:06 -- paths/export.sh@5 -- # export PATH 00:23:50.456 15:11:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.456 15:11:06 -- nvmf/common.sh@47 -- # : 0 00:23:50.456 15:11:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.456 15:11:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.456 15:11:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.456 15:11:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.456 15:11:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.456 15:11:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.456 15:11:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.456 15:11:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.456 15:11:06 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:50.456 15:11:06 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.456 15:11:06 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:50.456 15:11:06 -- host/perf.sh@17 -- # nvmftestinit 00:23:50.456 15:11:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:50.456 15:11:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:23:50.456 15:11:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:50.456 15:11:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:50.456 15:11:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:50.456 15:11:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.456 15:11:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.456 15:11:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.456 15:11:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:50.456 15:11:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:50.456 15:11:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:50.456 15:11:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:50.456 15:11:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:50.456 15:11:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:50.456 15:11:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.456 15:11:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.456 15:11:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:50.456 15:11:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:50.456 15:11:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:50.456 15:11:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:50.456 15:11:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:50.456 15:11:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.456 15:11:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:50.456 15:11:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:50.456 15:11:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:50.456 15:11:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:50.456 15:11:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:50.456 15:11:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:50.456 Cannot find device "nvmf_tgt_br" 00:23:50.456 15:11:06 -- nvmf/common.sh@155 -- # true 00:23:50.456 15:11:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:50.456 Cannot find device "nvmf_tgt_br2" 00:23:50.456 15:11:06 -- nvmf/common.sh@156 -- # true 00:23:50.456 15:11:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:50.456 15:11:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:50.456 Cannot find device "nvmf_tgt_br" 00:23:50.456 15:11:06 -- nvmf/common.sh@158 -- # true 00:23:50.456 15:11:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:50.456 Cannot find device "nvmf_tgt_br2" 00:23:50.456 15:11:06 -- nvmf/common.sh@159 -- # true 00:23:50.456 15:11:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:50.715 15:11:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:50.715 15:11:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:50.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.715 15:11:06 -- nvmf/common.sh@162 -- # true 00:23:50.715 15:11:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:50.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:50.715 15:11:06 -- nvmf/common.sh@163 -- # true 00:23:50.715 15:11:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:50.715 15:11:06 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:23:50.715 15:11:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:50.715 15:11:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:50.715 15:11:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:50.715 15:11:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:50.715 15:11:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:50.715 15:11:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:50.716 15:11:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:50.716 15:11:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:50.716 15:11:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:50.716 15:11:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:50.716 15:11:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:50.716 15:11:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:50.716 15:11:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:50.716 15:11:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:50.716 15:11:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:50.716 15:11:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:50.716 15:11:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:50.716 15:11:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:50.974 15:11:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:50.974 15:11:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:50.974 15:11:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:50.974 15:11:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:50.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:23:50.974 00:23:50.974 --- 10.0.0.2 ping statistics --- 00:23:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.974 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:50.974 15:11:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:50.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:50.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:23:50.974 00:23:50.974 --- 10.0.0.3 ping statistics --- 00:23:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.974 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:50.974 15:11:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:50.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:23:50.974 00:23:50.974 --- 10.0.0.1 ping statistics --- 00:23:50.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.974 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:50.974 15:11:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.974 15:11:06 -- nvmf/common.sh@422 -- # return 0 00:23:50.974 15:11:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:50.974 15:11:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.974 15:11:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:50.974 15:11:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:50.974 15:11:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.974 15:11:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:50.974 15:11:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:50.974 15:11:06 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:50.974 15:11:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:50.974 15:11:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:50.974 15:11:06 -- common/autotest_common.sh@10 -- # set +x 00:23:50.974 15:11:06 -- nvmf/common.sh@470 -- # nvmfpid=80444 00:23:50.974 15:11:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:50.974 15:11:06 -- nvmf/common.sh@471 -- # waitforlisten 80444 00:23:50.974 15:11:06 -- common/autotest_common.sh@817 -- # '[' -z 80444 ']' 00:23:50.974 15:11:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.974 15:11:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:50.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.974 15:11:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.974 15:11:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:50.974 15:11:06 -- common/autotest_common.sh@10 -- # set +x 00:23:50.974 [2024-04-18 15:11:06.575944] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:23:50.974 [2024-04-18 15:11:06.576042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.233 [2024-04-18 15:11:06.710092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.233 [2024-04-18 15:11:06.813697] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.233 [2024-04-18 15:11:06.813762] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.233 [2024-04-18 15:11:06.813776] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.233 [2024-04-18 15:11:06.813788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.233 [2024-04-18 15:11:06.813798] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.233 [2024-04-18 15:11:06.814005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.233 [2024-04-18 15:11:06.814142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.233 [2024-04-18 15:11:06.815149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.233 [2024-04-18 15:11:06.815152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.798 15:11:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.798 15:11:07 -- common/autotest_common.sh@850 -- # return 0 00:23:51.798 15:11:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:51.798 15:11:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:51.798 15:11:07 -- common/autotest_common.sh@10 -- # set +x 00:23:52.055 15:11:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.055 15:11:07 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:52.055 15:11:07 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:52.313 15:11:07 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:52.313 15:11:07 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:52.571 15:11:08 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:52.571 15:11:08 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:52.829 15:11:08 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:52.829 15:11:08 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:52.829 15:11:08 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:52.829 15:11:08 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:52.829 15:11:08 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.088 [2024-04-18 15:11:08.584488] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.088 15:11:08 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.346 15:11:08 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:53.346 15:11:08 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.638 15:11:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:53.638 15:11:09 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:53.638 15:11:09 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.897 [2024-04-18 15:11:09.540550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.897 15:11:09 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:54.156 15:11:09 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:54.156 15:11:09 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:54.156 15:11:09 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:54.156 15:11:09 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:55.534 Initializing NVMe 
Controllers 00:23:55.534 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:55.534 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:55.534 Initialization complete. Launching workers. 00:23:55.534 ======================================================== 00:23:55.534 Latency(us) 00:23:55.534 Device Information : IOPS MiB/s Average min max 00:23:55.534 PCIE (0000:00:10.0) NSID 1 from core 0: 22312.38 87.16 1435.10 315.32 9245.94 00:23:55.534 ======================================================== 00:23:55.534 Total : 22312.38 87.16 1435.10 315.32 9245.94 00:23:55.534 00:23:55.534 15:11:10 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.915 Initializing NVMe Controllers 00:23:56.915 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.915 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:56.915 Initialization complete. Launching workers. 00:23:56.915 ======================================================== 00:23:56.915 Latency(us) 00:23:56.915 Device Information : IOPS MiB/s Average min max 00:23:56.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4658.00 18.20 213.58 83.48 7179.93 00:23:56.915 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8060.11 5038.78 12060.24 00:23:56.915 ======================================================== 00:23:56.915 Total : 4783.00 18.68 418.65 83.48 12060.24 00:23:56.915 00:23:56.915 15:11:12 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:58.309 Initializing NVMe Controllers 00:23:58.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:58.309 Initialization complete. Launching workers. 00:23:58.309 ======================================================== 00:23:58.309 Latency(us) 00:23:58.309 Device Information : IOPS MiB/s Average min max 00:23:58.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10974.29 42.87 2915.84 463.78 9218.52 00:23:58.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2631.04 10.28 12272.76 4874.62 24308.81 00:23:58.309 ======================================================== 00:23:58.309 Total : 13605.33 53.15 4725.30 463.78 24308.81 00:23:58.309 00:23:58.309 15:11:13 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:58.309 15:11:13 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.843 Initializing NVMe Controllers 00:24:00.843 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.843 Controller IO queue size 128, less than required. 00:24:00.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.843 Controller IO queue size 128, less than required. 
00:24:00.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.843 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:00.843 Initialization complete. Launching workers. 00:24:00.843 ======================================================== 00:24:00.843 Latency(us) 00:24:00.843 Device Information : IOPS MiB/s Average min max 00:24:00.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1837.61 459.40 71122.63 49740.98 119654.97 00:24:00.843 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.88 144.72 231327.73 96077.42 377794.29 00:24:00.843 ======================================================== 00:24:00.843 Total : 2416.49 604.12 109500.27 49740.98 377794.29 00:24:00.843 00:24:00.843 15:11:16 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:00.843 No valid NVMe controllers or AIO or URING devices found 00:24:01.102 Initializing NVMe Controllers 00:24:01.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.102 Controller IO queue size 128, less than required. 00:24:01.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.102 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:01.102 Controller IO queue size 128, less than required. 00:24:01.102 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:01.102 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:01.102 WARNING: Some requested NVMe devices were skipped 00:24:01.102 15:11:16 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:03.643 Initializing NVMe Controllers 00:24:03.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.643 Controller IO queue size 128, less than required. 00:24:03.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:03.643 Controller IO queue size 128, less than required. 00:24:03.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:03.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:03.643 Initialization complete. Launching workers. 
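The "No valid NVMe controllers or AIO or URING devices found" result above follows directly from the I/O size handed to spdk_nvme_perf: 36964 bytes is not a whole number of sectors for either namespace, so both are dropped, exactly as the two WARNING lines state. A quick shell check (illustration only) shows the remainders:

    $ echo $((36964 % 512)) $((36964 % 4096))
    100 100

With the 512-byte-sector namespace (nsid 1) and the 4096-byte-sector namespace (nsid 2) both removed, that perf pass has nothing left to exercise and is skipped; the subsequent run with -o 262144 and --transport-stat proceeds normally.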
00:24:03.643 00:24:03.643 ==================== 00:24:03.643 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:03.643 TCP transport: 00:24:03.643 polls: 8087 00:24:03.643 idle_polls: 4544 00:24:03.643 sock_completions: 3543 00:24:03.643 nvme_completions: 5345 00:24:03.643 submitted_requests: 7992 00:24:03.643 queued_requests: 1 00:24:03.643 00:24:03.643 ==================== 00:24:03.643 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:03.643 TCP transport: 00:24:03.643 polls: 8365 00:24:03.643 idle_polls: 4865 00:24:03.643 sock_completions: 3500 00:24:03.643 nvme_completions: 6839 00:24:03.643 submitted_requests: 10170 00:24:03.643 queued_requests: 1 00:24:03.643 ======================================================== 00:24:03.643 Latency(us) 00:24:03.643 Device Information : IOPS MiB/s Average min max 00:24:03.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1335.84 333.96 98463.06 65328.04 158833.85 00:24:03.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1709.30 427.32 75817.19 43415.96 112374.38 00:24:03.643 ======================================================== 00:24:03.643 Total : 3045.14 761.28 85751.48 43415.96 158833.85 00:24:03.643 00:24:03.643 15:11:19 -- host/perf.sh@66 -- # sync 00:24:03.643 15:11:19 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.903 15:11:19 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:03.903 15:11:19 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:03.903 15:11:19 -- host/perf.sh@114 -- # nvmftestfini 00:24:03.903 15:11:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:03.903 15:11:19 -- nvmf/common.sh@117 -- # sync 00:24:03.903 15:11:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.903 15:11:19 -- nvmf/common.sh@120 -- # set +e 00:24:03.903 15:11:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.903 15:11:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.903 rmmod nvme_tcp 00:24:03.903 rmmod nvme_fabrics 00:24:03.903 rmmod nvme_keyring 00:24:03.903 15:11:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.903 15:11:19 -- nvmf/common.sh@124 -- # set -e 00:24:03.903 15:11:19 -- nvmf/common.sh@125 -- # return 0 00:24:03.903 15:11:19 -- nvmf/common.sh@478 -- # '[' -n 80444 ']' 00:24:03.903 15:11:19 -- nvmf/common.sh@479 -- # killprocess 80444 00:24:03.903 15:11:19 -- common/autotest_common.sh@936 -- # '[' -z 80444 ']' 00:24:03.903 15:11:19 -- common/autotest_common.sh@940 -- # kill -0 80444 00:24:03.903 15:11:19 -- common/autotest_common.sh@941 -- # uname 00:24:03.903 15:11:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:03.903 15:11:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80444 00:24:03.903 killing process with pid 80444 00:24:03.903 15:11:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:03.903 15:11:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:03.903 15:11:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80444' 00:24:03.903 15:11:19 -- common/autotest_common.sh@955 -- # kill 80444 00:24:03.903 15:11:19 -- common/autotest_common.sh@960 -- # wait 80444 00:24:04.470 15:11:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:04.470 15:11:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:04.470 15:11:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:04.471 15:11:20 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.471 15:11:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.471 15:11:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.471 15:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.471 15:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.729 15:11:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:04.729 00:24:04.729 real 0m14.365s 00:24:04.729 user 0m51.572s 00:24:04.729 sys 0m4.159s 00:24:04.729 15:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:04.729 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:24:04.729 ************************************ 00:24:04.729 END TEST nvmf_perf 00:24:04.729 ************************************ 00:24:04.729 15:11:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:04.729 15:11:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:04.729 15:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.729 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:24:04.729 ************************************ 00:24:04.729 START TEST nvmf_fio_host 00:24:04.729 ************************************ 00:24:04.729 15:11:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:04.988 * Looking for test storage... 00:24:04.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:04.988 15:11:20 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.988 15:11:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.988 15:11:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.988 15:11:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.988 15:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@5 -- # export PATH 00:24:04.988 15:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.988 15:11:20 -- nvmf/common.sh@7 -- # uname -s 00:24:04.988 15:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.988 15:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.988 15:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.988 15:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.988 15:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.988 15:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.988 15:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.988 15:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.988 15:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.988 15:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:04.988 15:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:04.988 15:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.988 15:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.988 15:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.988 15:11:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.988 15:11:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.988 15:11:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.988 15:11:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.988 15:11:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.988 15:11:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- paths/export.sh@5 -- # export PATH 00:24:04.988 15:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.988 15:11:20 -- nvmf/common.sh@47 -- # : 0 00:24:04.988 15:11:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.988 15:11:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.988 15:11:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.988 15:11:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.988 15:11:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.988 15:11:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.988 15:11:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.988 15:11:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.988 15:11:20 -- host/fio.sh@12 -- # nvmftestinit 00:24:04.988 15:11:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:04.988 15:11:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.988 15:11:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:04.988 15:11:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:04.988 15:11:20 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:24:04.988 15:11:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.988 15:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.988 15:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.988 15:11:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:04.988 15:11:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:04.988 15:11:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.988 15:11:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.988 15:11:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:04.988 15:11:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:04.988 15:11:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.988 15:11:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.988 15:11:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.988 15:11:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.988 15:11:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.988 15:11:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.988 15:11:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.988 15:11:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.988 15:11:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:04.988 15:11:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:04.988 Cannot find device "nvmf_tgt_br" 00:24:04.988 15:11:20 -- nvmf/common.sh@155 -- # true 00:24:04.988 15:11:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.988 Cannot find device "nvmf_tgt_br2" 00:24:04.988 15:11:20 -- nvmf/common.sh@156 -- # true 00:24:04.988 15:11:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:04.988 15:11:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:04.988 Cannot find device "nvmf_tgt_br" 00:24:04.988 15:11:20 -- nvmf/common.sh@158 -- # true 00:24:04.988 15:11:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:04.988 Cannot find device "nvmf_tgt_br2" 00:24:04.988 15:11:20 -- nvmf/common.sh@159 -- # true 00:24:04.988 15:11:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:04.988 15:11:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:05.247 15:11:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.247 15:11:20 -- nvmf/common.sh@162 -- # true 00:24:05.247 15:11:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.247 15:11:20 -- nvmf/common.sh@163 -- # true 00:24:05.247 15:11:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.247 15:11:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.247 15:11:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:24:05.247 15:11:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.247 15:11:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.247 15:11:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.247 15:11:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.247 15:11:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:05.247 15:11:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:05.247 15:11:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:05.247 15:11:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:05.247 15:11:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:05.247 15:11:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:05.247 15:11:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.247 15:11:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.247 15:11:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.247 15:11:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:05.247 15:11:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:05.247 15:11:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.247 15:11:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.247 15:11:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.247 15:11:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.247 15:11:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.247 15:11:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:05.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:24:05.247 00:24:05.247 --- 10.0.0.2 ping statistics --- 00:24:05.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.247 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:05.247 15:11:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:05.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:24:05.247 00:24:05.247 --- 10.0.0.3 ping statistics --- 00:24:05.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.247 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:05.247 15:11:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:24:05.247 00:24:05.247 --- 10.0.0.1 ping statistics --- 00:24:05.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.247 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:05.247 15:11:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.247 15:11:20 -- nvmf/common.sh@422 -- # return 0 00:24:05.247 15:11:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:05.247 15:11:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.247 15:11:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:05.247 15:11:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:05.247 15:11:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.247 15:11:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:05.247 15:11:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:05.506 15:11:20 -- host/fio.sh@14 -- # [[ y != y ]] 00:24:05.506 15:11:20 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:05.506 15:11:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:05.506 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:24:05.506 15:11:20 -- host/fio.sh@22 -- # nvmfpid=80936 00:24:05.506 15:11:20 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.506 15:11:20 -- host/fio.sh@26 -- # waitforlisten 80936 00:24:05.506 15:11:20 -- common/autotest_common.sh@817 -- # '[' -z 80936 ']' 00:24:05.506 15:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.506 15:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:05.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.506 15:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.506 15:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:05.506 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:24:05.506 15:11:20 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.506 [2024-04-18 15:11:21.034335] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:24:05.506 [2024-04-18 15:11:21.034428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.506 [2024-04-18 15:11:21.180185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.764 [2024-04-18 15:11:21.287715] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.764 [2024-04-18 15:11:21.287776] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.764 [2024-04-18 15:11:21.287787] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.764 [2024-04-18 15:11:21.287796] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.764 [2024-04-18 15:11:21.287803] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
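The nvmf_veth_init sequence traced above reduces to a small veth-plus-bridge topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target side lives inside the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the *_br peer ends are tied together by the nvmf_br bridge. A condensed sketch of those same commands (the second target interface, the link-up steps and the FORWARD rule are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach port 4420

The three pings earlier in the trace verify both directions of that path before the target application is launched inside the namespace.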
00:24:05.764 [2024-04-18 15:11:21.288036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.764 [2024-04-18 15:11:21.288114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.764 [2024-04-18 15:11:21.289153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.764 [2024-04-18 15:11:21.289154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.329 15:11:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.329 15:11:21 -- common/autotest_common.sh@850 -- # return 0 00:24:06.329 15:11:21 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.329 15:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.329 15:11:21 -- common/autotest_common.sh@10 -- # set +x 00:24:06.329 [2024-04-18 15:11:21.966993] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.329 15:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.329 15:11:21 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:06.329 15:11:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:06.329 15:11:21 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 15:11:22 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:06.588 15:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.588 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 Malloc1 00:24:06.588 15:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.588 15:11:22 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.588 15:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.588 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 15:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.588 15:11:22 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:06.588 15:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.588 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 15:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.588 15:11:22 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.588 15:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.588 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 [2024-04-18 15:11:22.098428] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.588 15:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.588 15:11:22 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.588 15:11:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.588 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:24:06.588 15:11:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.588 15:11:22 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:06.588 15:11:22 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.588 15:11:22 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
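The fio_nvme/fio_plugin helper whose internals are traced next is stock fio with the SPDK NVMe external ioengine preloaded; the NVMe-oF target is addressed entirely through the --filename string (transport type, address family, address, service id and namespace) rather than a block device node. Stripped of the sanitizer-library probing, the invocation amounts to roughly:

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second fio pass later in this test uses the same mechanism with mock_sgl_config.fio and a 16 KiB block size.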
00:24:06.588 15:11:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:06.588 15:11:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.588 15:11:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:06.588 15:11:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:06.588 15:11:22 -- common/autotest_common.sh@1327 -- # shift 00:24:06.588 15:11:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:06.588 15:11:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:06.588 15:11:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:06.588 15:11:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:06.588 15:11:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:06.588 15:11:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:06.588 15:11:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:06.588 15:11:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.847 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:06.847 fio-3.35 00:24:06.847 Starting 1 thread 00:24:09.391 00:24:09.391 test: (groupid=0, jobs=1): err= 0: pid=81009: Thu Apr 18 15:11:24 2024 00:24:09.391 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(89.2MiB/2005msec) 00:24:09.391 slat (nsec): min=1550, max=412016, avg=1699.06, stdev=3484.52 00:24:09.391 clat (usec): min=3856, max=11676, avg=5893.82, stdev=443.80 00:24:09.391 lat (usec): min=3858, max=11687, avg=5895.52, stdev=444.04 00:24:09.391 clat percentiles (usec): 00:24:09.391 | 1.00th=[ 5080], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5604], 00:24:09.391 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5932], 00:24:09.391 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6456], 00:24:09.391 | 99.00th=[ 6849], 99.50th=[ 7898], 99.90th=[11076], 99.95th=[11469], 00:24:09.391 | 99.99th=[11600] 00:24:09.391 bw ( KiB/s): min=44392, max=46200, per=99.96%, avg=45522.00, stdev=849.54, samples=4 00:24:09.391 iops : min=11098, max=11550, avg=11380.50, stdev=212.39, samples=4 00:24:09.391 write: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(88.6MiB/2005msec); 0 zone resets 00:24:09.391 slat (nsec): min=1600, max=297939, avg=1757.29, stdev=2171.22 00:24:09.391 clat (usec): min=2903, max=9959, avg=5341.38, stdev=361.26 00:24:09.391 lat (usec): min=2918, max=9961, avg=5343.14, stdev=361.43 00:24:09.391 clat percentiles (usec): 00:24:09.391 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 4948], 20.00th=[ 5080], 00:24:09.391 | 30.00th=[ 5211], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 
00:24:09.391 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5735], 95.00th=[ 5866], 00:24:09.391 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 8586], 99.95th=[ 9372], 00:24:09.391 | 99.99th=[ 9896] 00:24:09.391 bw ( KiB/s): min=44632, max=45760, per=99.98%, avg=45258.00, stdev=500.64, samples=4 00:24:09.391 iops : min=11158, max=11440, avg=11314.50, stdev=125.16, samples=4 00:24:09.391 lat (msec) : 4=0.21%, 10=99.70%, 20=0.09% 00:24:09.391 cpu : usr=66.67%, sys=25.65%, ctx=9, majf=0, minf=5 00:24:09.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:09.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.391 issued rwts: total=22826,22691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.391 00:24:09.391 Run status group 0 (all jobs): 00:24:09.391 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=89.2MiB (93.5MB), run=2005-2005msec 00:24:09.391 WRITE: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=88.6MiB (92.9MB), run=2005-2005msec 00:24:09.391 15:11:24 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.391 15:11:24 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.391 15:11:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:09.391 15:11:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.391 15:11:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:09.391 15:11:24 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.391 15:11:24 -- common/autotest_common.sh@1327 -- # shift 00:24:09.391 15:11:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:09.391 15:11:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:09.391 15:11:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:09.391 15:11:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:09.391 15:11:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:24:09.391 15:11:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:24:09.391 15:11:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:09.391 15:11:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.391 
test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:09.391 fio-3.35 00:24:09.391 Starting 1 thread 00:24:11.968 00:24:11.968 test: (groupid=0, jobs=1): err= 0: pid=81052: Thu Apr 18 15:11:27 2024 00:24:11.968 read: IOPS=10.3k, BW=161MiB/s (169MB/s)(323MiB/2005msec) 00:24:11.968 slat (usec): min=2, max=100, avg= 2.84, stdev= 1.70 00:24:11.968 clat (usec): min=2097, max=28947, avg=7436.04, stdev=2092.42 00:24:11.968 lat (usec): min=2099, max=28950, avg=7438.88, stdev=2092.59 00:24:11.968 clat percentiles (usec): 00:24:11.968 | 1.00th=[ 3785], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5735], 00:24:11.968 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7308], 60.00th=[ 8029], 00:24:11.968 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10421], 00:24:11.968 | 99.00th=[12780], 99.50th=[15401], 99.90th=[25035], 99.95th=[27657], 00:24:11.968 | 99.99th=[28967] 00:24:11.968 bw ( KiB/s): min=71424, max=95552, per=49.29%, avg=81240.00, stdev=11258.40, samples=4 00:24:11.968 iops : min= 4464, max= 5972, avg=5077.50, stdev=703.65, samples=4 00:24:11.968 write: IOPS=6285, BW=98.2MiB/s (103MB/s)(166MiB/1693msec); 0 zone resets 00:24:11.968 slat (usec): min=28, max=367, avg=31.02, stdev= 8.62 00:24:11.968 clat (usec): min=3322, max=28488, avg=8918.95, stdev=1806.69 00:24:11.968 lat (usec): min=3360, max=28520, avg=8949.96, stdev=1809.00 00:24:11.968 clat percentiles (usec): 00:24:11.968 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7570], 00:24:11.968 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 8979], 00:24:11.968 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[11076], 95.00th=[11863], 00:24:11.968 | 99.00th=[13960], 99.50th=[15008], 99.90th=[27657], 99.95th=[27919], 00:24:11.968 | 99.99th=[28443] 00:24:11.968 bw ( KiB/s): min=75200, max=99040, per=84.11%, avg=84592.00, stdev=11337.65, samples=4 00:24:11.968 iops : min= 4700, max= 6190, avg=5287.00, stdev=708.60, samples=4 00:24:11.968 lat (msec) : 4=0.96%, 10=87.54%, 20=11.18%, 50=0.32% 00:24:11.968 cpu : usr=71.07%, sys=19.15%, ctx=22, majf=0, minf=31 00:24:11.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:11.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:11.968 issued rwts: total=20654,10642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:11.968 00:24:11.968 Run status group 0 (all jobs): 00:24:11.968 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=323MiB (338MB), run=2005-2005msec 00:24:11.968 WRITE: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=166MiB (174MB), run=1693-1693msec 00:24:11.968 15:11:27 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.968 15:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.968 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:24:11.968 15:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.968 15:11:27 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:11.968 15:11:27 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:11.968 15:11:27 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:11.968 15:11:27 -- host/fio.sh@84 -- # nvmftestfini 00:24:11.968 15:11:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:11.968 15:11:27 -- nvmf/common.sh@117 -- # sync 00:24:11.968 
15:11:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.968 15:11:27 -- nvmf/common.sh@120 -- # set +e 00:24:11.968 15:11:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.968 15:11:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.968 rmmod nvme_tcp 00:24:11.968 rmmod nvme_fabrics 00:24:11.968 rmmod nvme_keyring 00:24:11.968 15:11:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.968 15:11:27 -- nvmf/common.sh@124 -- # set -e 00:24:11.968 15:11:27 -- nvmf/common.sh@125 -- # return 0 00:24:11.968 15:11:27 -- nvmf/common.sh@478 -- # '[' -n 80936 ']' 00:24:11.968 15:11:27 -- nvmf/common.sh@479 -- # killprocess 80936 00:24:11.968 15:11:27 -- common/autotest_common.sh@936 -- # '[' -z 80936 ']' 00:24:11.968 15:11:27 -- common/autotest_common.sh@940 -- # kill -0 80936 00:24:11.968 15:11:27 -- common/autotest_common.sh@941 -- # uname 00:24:11.968 15:11:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.968 15:11:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80936 00:24:11.968 killing process with pid 80936 00:24:11.968 15:11:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:11.968 15:11:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:11.968 15:11:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80936' 00:24:11.968 15:11:27 -- common/autotest_common.sh@955 -- # kill 80936 00:24:11.968 15:11:27 -- common/autotest_common.sh@960 -- # wait 80936 00:24:11.968 15:11:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:11.968 15:11:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:11.968 15:11:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:11.968 15:11:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.968 15:11:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.968 15:11:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.968 15:11:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.968 15:11:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.968 15:11:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:11.968 00:24:11.968 real 0m7.207s 00:24:11.968 user 0m27.446s 00:24:11.968 sys 0m2.343s 00:24:11.968 15:11:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:11.968 ************************************ 00:24:11.968 END TEST nvmf_fio_host 00:24:11.968 ************************************ 00:24:11.968 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:24:11.968 15:11:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.968 15:11:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:11.968 15:11:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:11.968 15:11:27 -- common/autotest_common.sh@10 -- # set +x 00:24:12.227 ************************************ 00:24:12.227 START TEST nvmf_failover 00:24:12.227 ************************************ 00:24:12.227 15:11:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:12.227 * Looking for test storage... 
00:24:12.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:12.227 15:11:27 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:12.227 15:11:27 -- nvmf/common.sh@7 -- # uname -s 00:24:12.227 15:11:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.227 15:11:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.227 15:11:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.227 15:11:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.227 15:11:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.227 15:11:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.227 15:11:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.227 15:11:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.227 15:11:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.227 15:11:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.227 15:11:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:12.228 15:11:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:12.228 15:11:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.228 15:11:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.228 15:11:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:12.228 15:11:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.228 15:11:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.228 15:11:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.228 15:11:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.228 15:11:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.228 15:11:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.228 15:11:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.228 15:11:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.228 15:11:27 -- paths/export.sh@5 -- # export PATH 00:24:12.228 15:11:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.228 15:11:27 -- nvmf/common.sh@47 -- # : 0 00:24:12.228 15:11:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.228 15:11:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.228 15:11:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.228 15:11:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.228 15:11:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.228 15:11:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.228 15:11:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.228 15:11:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.228 15:11:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.228 15:11:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.228 15:11:27 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:12.228 15:11:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.228 15:11:27 -- host/failover.sh@18 -- # nvmftestinit 00:24:12.228 15:11:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:12.228 15:11:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.228 15:11:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:12.228 15:11:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:12.228 15:11:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:12.228 15:11:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.228 15:11:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.228 15:11:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.228 15:11:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:12.228 15:11:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:12.228 15:11:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:12.228 15:11:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:12.228 15:11:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:12.228 15:11:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:12.228 15:11:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.228 15:11:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.228 15:11:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:12.228 15:11:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:12.228 15:11:27 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:12.228 15:11:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:12.228 15:11:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:12.228 15:11:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.228 15:11:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:12.228 15:11:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:12.228 15:11:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:12.228 15:11:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:12.228 15:11:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:12.487 15:11:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:12.487 Cannot find device "nvmf_tgt_br" 00:24:12.487 15:11:27 -- nvmf/common.sh@155 -- # true 00:24:12.487 15:11:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:12.487 Cannot find device "nvmf_tgt_br2" 00:24:12.487 15:11:27 -- nvmf/common.sh@156 -- # true 00:24:12.487 15:11:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:12.487 15:11:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:12.487 Cannot find device "nvmf_tgt_br" 00:24:12.487 15:11:28 -- nvmf/common.sh@158 -- # true 00:24:12.487 15:11:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:12.487 Cannot find device "nvmf_tgt_br2" 00:24:12.487 15:11:28 -- nvmf/common.sh@159 -- # true 00:24:12.487 15:11:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:12.487 15:11:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:12.487 15:11:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:12.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.487 15:11:28 -- nvmf/common.sh@162 -- # true 00:24:12.487 15:11:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:12.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:12.487 15:11:28 -- nvmf/common.sh@163 -- # true 00:24:12.487 15:11:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:12.487 15:11:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:12.487 15:11:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:12.487 15:11:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:12.487 15:11:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:12.488 15:11:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:12.488 15:11:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:12.488 15:11:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:12.488 15:11:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:12.748 15:11:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:12.748 15:11:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:12.748 15:11:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:12.748 15:11:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:12.748 15:11:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:24:12.748 15:11:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:12.748 15:11:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:12.748 15:11:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:12.748 15:11:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:12.748 15:11:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:12.748 15:11:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:12.748 15:11:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:12.748 15:11:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:12.748 15:11:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:12.748 15:11:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:12.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:24:12.748 00:24:12.748 --- 10.0.0.2 ping statistics --- 00:24:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.748 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:12.748 15:11:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:12.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:12.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:24:12.748 00:24:12.748 --- 10.0.0.3 ping statistics --- 00:24:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.748 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:12.748 15:11:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:12.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:24:12.748 00:24:12.748 --- 10.0.0.1 ping statistics --- 00:24:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.748 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:12.748 15:11:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.748 15:11:28 -- nvmf/common.sh@422 -- # return 0 00:24:12.748 15:11:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:12.748 15:11:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.748 15:11:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:12.748 15:11:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:12.748 15:11:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.748 15:11:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:12.748 15:11:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:12.748 15:11:28 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:12.748 15:11:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:12.748 15:11:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:12.748 15:11:28 -- common/autotest_common.sh@10 -- # set +x 00:24:12.748 15:11:28 -- nvmf/common.sh@470 -- # nvmfpid=81272 00:24:12.748 15:11:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:12.748 15:11:28 -- nvmf/common.sh@471 -- # waitforlisten 81272 00:24:12.748 15:11:28 -- common/autotest_common.sh@817 -- # '[' -z 81272 ']' 00:24:12.748 15:11:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.748 15:11:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:12.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.748 15:11:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.748 15:11:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:12.748 15:11:28 -- common/autotest_common.sh@10 -- # set +x 00:24:12.748 [2024-04-18 15:11:28.395800] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:24:12.748 [2024-04-18 15:11:28.395881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.008 [2024-04-18 15:11:28.540739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.008 [2024-04-18 15:11:28.632871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.008 [2024-04-18 15:11:28.632927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.008 [2024-04-18 15:11:28.632937] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.008 [2024-04-18 15:11:28.632946] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.008 [2024-04-18 15:11:28.632953] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
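One detail worth noting: the failover target is launched with -m 0xE, i.e. binary 1110, so its reactors are pinned to cores 1-3 and core 0 is left unused, whereas the fio-host target earlier was started with -m 0xF (all four cores). The reactor start-up notices that follow match that mask. For illustration:

    $ printf '0x%X\n' "$((2#1110))"    # cores 1, 2 and 3 set, core 0 clear
    0xE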
00:24:13.008 [2024-04-18 15:11:28.633131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.008 [2024-04-18 15:11:28.634069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.008 [2024-04-18 15:11:28.634071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.576 15:11:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:13.576 15:11:29 -- common/autotest_common.sh@850 -- # return 0 00:24:13.576 15:11:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:13.576 15:11:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:13.576 15:11:29 -- common/autotest_common.sh@10 -- # set +x 00:24:13.835 15:11:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.835 15:11:29 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.835 [2024-04-18 15:11:29.470420] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.835 15:11:29 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:14.094 Malloc0 00:24:14.094 15:11:29 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.353 15:11:29 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:14.612 15:11:30 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.612 [2024-04-18 15:11:30.271262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.612 15:11:30 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.872 [2024-04-18 15:11:30.459052] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.872 15:11:30 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:15.132 [2024-04-18 15:11:30.746926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:15.132 15:11:30 -- host/failover.sh@31 -- # bdevperf_pid=81382 00:24:15.132 15:11:30 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:15.132 15:11:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.132 15:11:30 -- host/failover.sh@34 -- # waitforlisten 81382 /var/tmp/bdevperf.sock 00:24:15.132 15:11:30 -- common/autotest_common.sh@817 -- # '[' -z 81382 ']' 00:24:15.132 15:11:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.132 15:11:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:15.132 15:11:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
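The target-side configuration that the test drives through rpc.py above can be condensed to the sketch below; paths and arguments are the ones shown in the trace, and the loop is shorthand for the three separate add_listener calls.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport plus a 64 MB malloc bdev with 512-byte blocks as the backing namespace.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Three listeners on the same subsystem give the initiator three candidate paths.
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done
# bdevperf is started idle (-z) on its own RPC socket so that controllers can be
# attached, and the workload triggered, over RPC once everything is in place.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &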
00:24:15.132 15:11:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:15.132 15:11:30 -- common/autotest_common.sh@10 -- # set +x 00:24:16.069 15:11:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:16.069 15:11:31 -- common/autotest_common.sh@850 -- # return 0 00:24:16.069 15:11:31 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.328 NVMe0n1 00:24:16.329 15:11:31 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.587 00:24:16.587 15:11:32 -- host/failover.sh@39 -- # run_test_pid=81431 00:24:16.587 15:11:32 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.587 15:11:32 -- host/failover.sh@41 -- # sleep 1 00:24:17.966 15:11:33 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.966 [2024-04-18 15:11:33.454332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454405] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 [2024-04-18 15:11:33.454439] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2c30 is same with the state(5) to be set 00:24:17.966 15:11:33 -- host/failover.sh@45 -- # sleep 3 00:24:21.253 15:11:36 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.253 00:24:21.253 15:11:36 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.513 [2024-04-18 15:11:37.009112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009204] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 [2024-04-18 15:11:37.009223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f30e0 is same with the state(5) to be set 00:24:21.513 15:11:37 -- host/failover.sh@50 -- # sleep 3 00:24:24.796 15:11:40 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.796 [2024-04-18 15:11:40.238817] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.796 15:11:40 -- host/failover.sh@55 -- # sleep 1 00:24:25.729 15:11:41 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:25.988 [2024-04-18 15:11:41.479476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.988 [2024-04-18 15:11:41.479621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479665] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 15:11:41.479674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.989 [2024-04-18 
15:11:41.479683] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.990 [2024-04-18 15:11:41.480242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229a760 is same with the state(5) to be set 00:24:25.990 15:11:41 -- host/failover.sh@59 -- # wait 81431 00:24:32.558 0 00:24:32.558 15:11:47 -- host/failover.sh@61 -- # killprocess 81382
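Stripped of the xtrace noise, the path-switching part of host/failover.sh (the @35 through @61 steps referenced in the trace above) comes down to roughly the following sketch, with socket paths, addresses and timings as they appear in the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
brpc="$rpc -s /var/tmp/bdevperf.sock"
nqn=nqn.2016-06.io.spdk:cnode1
# Attach two TCP paths to the same subsystem under one controller name (NVMe0 -> NVMe0n1).
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
# Kick off the 15-second verify workload in the background...
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 1
# ...then remove and re-add listeners underneath it to force path failovers.
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 3
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
wait $run_test_pid
# The harness then kills the idle bdevperf process and dumps its try.txt log.

The ABORTED - SQ DELETION dump in try.txt below is the expected side effect of each removal: commands still in flight on the torn-down queue pair are failed back to the initiator, and the verify workload has to ride through each switch for the run to complete.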
00:24:32.558 15:11:47 -- common/autotest_common.sh@936 -- # '[' -z 81382 ']' 00:24:32.558 15:11:47 -- common/autotest_common.sh@940 -- # kill -0 81382 00:24:32.558 15:11:47 -- common/autotest_common.sh@941 -- # uname 00:24:32.558 15:11:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:32.558 15:11:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81382 00:24:32.558 killing process with pid 81382 00:24:32.558 15:11:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:32.558 15:11:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:32.558 15:11:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81382' 00:24:32.558 15:11:47 -- common/autotest_common.sh@955 -- # kill 81382 00:24:32.558 15:11:47 -- common/autotest_common.sh@960 -- # wait 81382 00:24:32.558 15:11:47 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:32.558 [2024-04-18 15:11:30.827790] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:24:32.558 [2024-04-18 15:11:30.828001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81382 ] 00:24:32.558 [2024-04-18 15:11:30.970485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.558 [2024-04-18 15:11:31.077861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.558 Running I/O for 15 seconds... 00:24:32.558 [2024-04-18 15:11:33.454665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:32.558 [2024-04-18 15:11:33.454861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.454979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.454991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455150] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.558 [2024-04-18 15:11:33.455282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.558 [2024-04-18 15:11:33.455296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.559 [2024-04-18 15:11:33.455553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.455977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.455990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.559 [2024-04-18 15:11:33.456354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.559 [2024-04-18 15:11:33.456366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.456425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 
15:11:33.456526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.456977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.456991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457068] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.560 [2024-04-18 15:11:33.457320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.560 [2024-04-18 15:11:33.457436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.560 [2024-04-18 15:11:33.457450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101192 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:32.561 [2024-04-18 15:11:33.457914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.457966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.457990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:33.458165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 
15:11:33.458190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c93b0 is same with the state(5) to be set 00:24:32.561 [2024-04-18 15:11:33.458220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.561 [2024-04-18 15:11:33.458230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.561 [2024-04-18 15:11:33.458248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101360 len:8 PRP1 0x0 PRP2 0x0 00:24:32.561 [2024-04-18 15:11:33.458261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458315] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c93b0 was disconnected and freed. reset controller. 00:24:32.561 [2024-04-18 15:11:33.458331] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:32.561 [2024-04-18 15:11:33.458382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.561 [2024-04-18 15:11:33.458397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.561 [2024-04-18 15:11:33.458424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.561 [2024-04-18 15:11:33.458450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.561 [2024-04-18 15:11:33.458476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:33.458489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.561 [2024-04-18 15:11:33.461205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.561 [2024-04-18 15:11:33.461248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1061740 (9): Bad file descriptor 00:24:32.561 [2024-04-18 15:11:33.490098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
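[editor note] The records above are one complete bdev_nvme failover cycle: the dropped qpair 0x10c93b0 is disconnected and freed, queued I/O completes with ABORTED - SQ DELETION, the trid fails over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset succeeds. For reference only, a target exposing several listeners on the same subsystem (so the initiator has alternate paths to fail over to) could be set up roughly as in the sketch below; the NQN and addresses are taken from this log, while the malloc backing bdev and the transport parameters are assumptions for illustration, not the configuration this job actually used.

  # hedged sketch: multi-listener target setup via SPDK rpc.py (assumed params)
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # several listeners on one subsystem give the initiator paths to fail over to
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done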
00:24:32.561 [2024-04-18 15:11:37.009335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.561 [2024-04-18 15:11:37.009394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:37.009470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:37.009485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:37.009501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:37.009515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:37.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.561 [2024-04-18 15:11:37.009544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.561 [2024-04-18 15:11:37.009570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009752] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.009978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.009994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010343] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.562 [2024-04-18 15:11:37.010529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.562 [2024-04-18 15:11:37.010552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.010983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.010998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 
15:11:37.011239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.563 [2024-04-18 15:11:37.011325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.563 [2024-04-18 15:11:37.011725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.563 [2024-04-18 15:11:37.011739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.011984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.564 [2024-04-18 15:11:37.012433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.564 [2024-04-18 15:11:37.012879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.564 [2024-04-18 15:11:37.012892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.012907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.012920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.012935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.012965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.012981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.012995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:37.013195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1061df0 is same with the state(5) to be set 00:24:32.565 [2024-04-18 15:11:37.013234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.565 [2024-04-18 15:11:37.013244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.565 [2024-04-18 15:11:37.013255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26040 len:8 PRP1 0x0 PRP2 0x0 00:24:32.565 [2024-04-18 15:11:37.013269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013332] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1061df0 was disconnected and freed. reset controller. 
00:24:32.565 [2024-04-18 15:11:37.013351] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:32.565 [2024-04-18 15:11:37.013411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:37.013428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:37.013457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:37.013485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:37.013513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:37.013527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.565 [2024-04-18 15:11:37.013577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1061740 (9): Bad file descriptor 00:24:32.565 [2024-04-18 15:11:37.016656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.565 [2024-04-18 15:11:37.050322] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
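[editor note] The second leg of the failover finishes here: the trid moves from 10.0.0.2:4421 to 10.0.0.2:4422 and the controller reset again succeeds. Each aborted command in these runs carries status (00/08), i.e. status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is what the host reports for I/O still queued on a submission queue that is torn down during the reset. A rough way to gauge how much I/O each failover aborted is to count these completions in a saved copy of the console output; the file name below is a placeholder, not an artifact this job produces.

  # hedged sketch: tally aborted completions and failover transitions in a saved log
  grep -c 'ABORTED - SQ DELETION' console.log
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' console.log | sort | uniq -c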
00:24:32.565 [2024-04-18 15:11:41.480546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:41.480611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:41.480644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:41.480673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.565 [2024-04-18 15:11:41.480702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1061740 is same with the state(5) to be set 00:24:32.565 [2024-04-18 15:11:41.480823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.480979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.480993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.565 [2024-04-18 15:11:41.481298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.565 [2024-04-18 15:11:41.481312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.566 [2024-04-18 15:11:41.481923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.481974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.481989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.566 [2024-04-18 15:11:41.482287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.566 [2024-04-18 15:11:41.482511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.566 [2024-04-18 15:11:41.482527] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.482982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.482996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:32.567 [2024-04-18 15:11:41.483145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483422] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.567 [2024-04-18 15:11:41.483524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.567 [2024-04-18 15:11:41.483538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.483977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.483990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 
15:11:41.484307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.568 [2024-04-18 15:11:41.484603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484636] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.568 [2024-04-18 15:11:41.484647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.568 [2024-04-18 15:11:41.484657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:8 PRP1 0x0 PRP2 0x0 00:24:32.568 [2024-04-18 15:11:41.484670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.568 [2024-04-18 15:11:41.484726] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ccac0 was disconnected and freed. reset controller. 00:24:32.568 [2024-04-18 15:11:41.484743] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:32.568 [2024-04-18 15:11:41.484758] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.568 [2024-04-18 15:11:41.487771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.568 [2024-04-18 15:11:41.487829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1061740 (9): Bad file descriptor 00:24:32.569 [2024-04-18 15:11:41.523022] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:32.569 00:24:32.569 Latency(us) 00:24:32.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.569 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:32.569 Verification LBA range: start 0x0 length 0x4000 00:24:32.569 NVMe0n1 : 15.00 10522.74 41.10 276.53 0.00 11829.66 506.65 17792.10 00:24:32.569 =================================================================================================================== 00:24:32.569 Total : 10522.74 41.10 276.53 0.00 11829.66 506.65 17792.10 00:24:32.569 Received shutdown signal, test time was about 15.000000 seconds 00:24:32.569 00:24:32.569 Latency(us) 00:24:32.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.569 =================================================================================================================== 00:24:32.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.569 15:11:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:32.569 15:11:47 -- host/failover.sh@65 -- # count=3 00:24:32.569 15:11:47 -- host/failover.sh@67 -- # (( count != 3 )) 00:24:32.569 15:11:47 -- host/failover.sh@73 -- # bdevperf_pid=81634 00:24:32.569 15:11:47 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:32.569 15:11:47 -- host/failover.sh@75 -- # waitforlisten 81634 /var/tmp/bdevperf.sock 00:24:32.569 15:11:47 -- common/autotest_common.sh@817 -- # '[' -z 81634 ']' 00:24:32.569 15:11:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.569 15:11:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:32.569 15:11:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
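At this point failover.sh restarts bdevperf in idle mode: -z makes it wait for an RPC call before running any job, and -r points it at a private UNIX-domain RPC socket so the NVMe controller can be assembled over RPC first (the remaining flags are the same workload settings echoed above). A minimal sketch of that launch-and-wait pattern, assuming the same repo layout and socket path as this run; the polling loop is a simplified stand-in for the waitforlisten helper invoked above:

    # start bdevperf idle (-z) on its own RPC socket, same workload flags as above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # wait for the RPC socket to appear before configuring anything over it
    while [[ ! -S /var/tmp/bdevperf.sock ]]; do
        sleep 0.1
    done

The controller attach/detach calls that follow all go through rpc.py -s /var/tmp/bdevperf.sock, while the extra subsystem listeners are added on the target's default RPC socket.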
00:24:32.569 15:11:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:32.569 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:24:33.173 15:11:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:33.173 15:11:48 -- common/autotest_common.sh@850 -- # return 0 00:24:33.173 15:11:48 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.173 [2024-04-18 15:11:48.862030] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.430 15:11:48 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.430 [2024-04-18 15:11:49.053964] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:33.430 15:11:49 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.688 NVMe0n1 00:24:33.688 15:11:49 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:33.945 00:24:34.204 15:11:49 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.462 00:24:34.462 15:11:49 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.462 15:11:49 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:34.462 15:11:50 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.720 15:11:50 -- host/failover.sh@87 -- # sleep 3 00:24:38.011 15:11:53 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.011 15:11:53 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:38.011 15:11:53 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.011 15:11:53 -- host/failover.sh@90 -- # run_test_pid=81765 00:24:38.011 15:11:53 -- host/failover.sh@92 -- # wait 81765 00:24:39.386 0 00:24:39.386 15:11:54 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:39.386 [2024-04-18 15:11:47.729176] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
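Stripping the xtrace noise, the sequence that produced the bdevperf run replayed in the log dump continuing below is: register two extra listeners on the target, attach the same controller three times under one bdev name so the extra trids become failover paths, then detach the active path to force a switch before kicking off the I/O over RPC. A condensed sketch of that flow, using the same addresses, ports and NQN as this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # extra listeners on the target (default RPC socket)
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # primary path plus two failover trids, all under bdev name NVMe0
    for port in 4420 4421 4422; do
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the active path so bdev_nvme fails over, then run the verify job over RPC
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The captured bdevperf log that the test cats back shows exactly that: the controller comes up on 4420, the detach triggers "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421", and the reset completes before the one-second verify pass finishes.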
00:24:39.386 [2024-04-18 15:11:47.729918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81634 ] 00:24:39.386 [2024-04-18 15:11:47.862665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.386 [2024-04-18 15:11:47.975524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.386 [2024-04-18 15:11:50.338040] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:39.386 [2024-04-18 15:11:50.338168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.386 [2024-04-18 15:11:50.338201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.386 [2024-04-18 15:11:50.338220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.386 [2024-04-18 15:11:50.338234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.386 [2024-04-18 15:11:50.338249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.386 [2024-04-18 15:11:50.338263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.386 [2024-04-18 15:11:50.338277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.386 [2024-04-18 15:11:50.338291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.386 [2024-04-18 15:11:50.338306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.386 [2024-04-18 15:11:50.338348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.386 [2024-04-18 15:11:50.338373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168f740 (9): Bad file descriptor 00:24:39.386 [2024-04-18 15:11:50.346095] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:39.386 Running I/O for 1 seconds... 
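The per-run summaries (the 15-second one earlier and the 1-second one just below) report throughput in both IOPS and MiB/s; the second column follows directly from the first, since every I/O in this job is 4096 bytes (-o 4096). A quick check of that arithmetic:

    # MiB/s = IOPS * io_size / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 10522.74 * 4096 / 1048576 }'   # 15 s run -> 41.10
    awk 'BEGIN { printf "%.2f MiB/s\n", 10309.88 * 4096 / 1048576 }'   # 1 s run  -> 40.27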
00:24:39.386 00:24:39.386 Latency(us) 00:24:39.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.386 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:39.386 Verification LBA range: start 0x0 length 0x4000 00:24:39.386 NVMe0n1 : 1.01 10309.88 40.27 0.00 0.00 12369.59 1763.42 12896.64 00:24:39.386 =================================================================================================================== 00:24:39.386 Total : 10309.88 40.27 0.00 0.00 12369.59 1763.42 12896.64 00:24:39.386 15:11:54 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:39.386 15:11:54 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.386 15:11:54 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.644 15:11:55 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.644 15:11:55 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:39.902 15:11:55 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.160 15:11:55 -- host/failover.sh@101 -- # sleep 3 00:24:43.449 15:11:58 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.449 15:11:58 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:43.449 15:11:58 -- host/failover.sh@108 -- # killprocess 81634 00:24:43.449 15:11:58 -- common/autotest_common.sh@936 -- # '[' -z 81634 ']' 00:24:43.449 15:11:58 -- common/autotest_common.sh@940 -- # kill -0 81634 00:24:43.449 15:11:58 -- common/autotest_common.sh@941 -- # uname 00:24:43.449 15:11:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.449 15:11:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81634 00:24:43.449 killing process with pid 81634 00:24:43.449 15:11:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:43.449 15:11:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:43.449 15:11:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81634' 00:24:43.449 15:11:58 -- common/autotest_common.sh@955 -- # kill 81634 00:24:43.449 15:11:58 -- common/autotest_common.sh@960 -- # wait 81634 00:24:43.708 15:11:59 -- host/failover.sh@110 -- # sync 00:24:43.708 15:11:59 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.966 15:11:59 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:43.966 15:11:59 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:43.966 15:11:59 -- host/failover.sh@116 -- # nvmftestfini 00:24:43.966 15:11:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:43.966 15:11:59 -- nvmf/common.sh@117 -- # sync 00:24:43.966 15:11:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.966 15:11:59 -- nvmf/common.sh@120 -- # set +e 00:24:43.966 15:11:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.966 15:11:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.966 rmmod nvme_tcp 00:24:43.966 rmmod nvme_fabrics 00:24:43.966 rmmod nvme_keyring 00:24:43.966 15:11:59 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.966 15:11:59 -- nvmf/common.sh@124 -- # set -e 00:24:43.966 15:11:59 -- nvmf/common.sh@125 -- # return 0 00:24:43.966 15:11:59 -- nvmf/common.sh@478 -- # '[' -n 81272 ']' 00:24:43.966 15:11:59 -- nvmf/common.sh@479 -- # killprocess 81272 00:24:43.966 15:11:59 -- common/autotest_common.sh@936 -- # '[' -z 81272 ']' 00:24:43.966 15:11:59 -- common/autotest_common.sh@940 -- # kill -0 81272 00:24:43.966 15:11:59 -- common/autotest_common.sh@941 -- # uname 00:24:43.966 15:11:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.966 15:11:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81272 00:24:43.966 killing process with pid 81272 00:24:43.966 15:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:43.966 15:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:43.966 15:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81272' 00:24:43.966 15:11:59 -- common/autotest_common.sh@955 -- # kill 81272 00:24:43.966 15:11:59 -- common/autotest_common.sh@960 -- # wait 81272 00:24:44.225 15:11:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:44.225 15:11:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:44.225 15:11:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:44.225 15:11:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:44.225 15:11:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:44.225 15:11:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.225 15:11:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.225 15:11:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.225 15:11:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:44.484 00:24:44.484 real 0m32.187s 00:24:44.484 user 2m3.039s 00:24:44.484 sys 0m5.700s 00:24:44.484 15:11:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:44.484 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:24:44.484 ************************************ 00:24:44.484 END TEST nvmf_failover 00:24:44.484 ************************************ 00:24:44.484 15:11:59 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:44.484 15:11:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:44.484 15:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:44.484 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:24:44.484 ************************************ 00:24:44.484 START TEST nvmf_discovery 00:24:44.484 ************************************ 00:24:44.484 15:12:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:44.743 * Looking for test storage... 
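The discovery test that starts here brings up the veth/namespace topology via nvmf_veth_init, starts one SPDK target inside the nvmf_tgt_ns_spdk namespace and a second SPDK app on /tmp/host.sock as the discovery client, then exercises the discovery service on 10.0.0.2:8009. A condensed sketch of the RPC sequence the log below records; rpc_cmd in the script is expanded here to the rpc.py path used elsewhere in this run, and the target's default /var/tmp/spdk.sock socket is assumed:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (the nvmf_tgt started inside the namespace, default RPC socket)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$rpc bdev_null_create null0 1000 512
$rpc bdev_null_create null1 1000 512

# Host side: the second app on /tmp/host.sock attaches to the discovery service
$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

The cnode0 subsystem, namespace and listener additions that follow in the log are issued through the same target-side socket.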
00:24:44.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:44.743 15:12:00 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.743 15:12:00 -- nvmf/common.sh@7 -- # uname -s 00:24:44.743 15:12:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.743 15:12:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.743 15:12:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.743 15:12:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.743 15:12:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.743 15:12:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.743 15:12:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.743 15:12:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.743 15:12:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.743 15:12:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:44.743 15:12:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:44.743 15:12:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.743 15:12:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.743 15:12:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.743 15:12:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.743 15:12:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.743 15:12:00 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.743 15:12:00 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.743 15:12:00 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.743 15:12:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.743 15:12:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.743 15:12:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.743 15:12:00 -- paths/export.sh@5 -- # export PATH 00:24:44.743 15:12:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.743 15:12:00 -- nvmf/common.sh@47 -- # : 0 00:24:44.743 15:12:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.743 15:12:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.743 15:12:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.743 15:12:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.743 15:12:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.743 15:12:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.743 15:12:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.743 15:12:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.743 15:12:00 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:44.743 15:12:00 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:44.743 15:12:00 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:44.743 15:12:00 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:44.743 15:12:00 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:44.743 15:12:00 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:44.743 15:12:00 -- host/discovery.sh@25 -- # nvmftestinit 00:24:44.743 15:12:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:44.743 15:12:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.743 15:12:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:44.743 15:12:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:44.743 15:12:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:44.743 15:12:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.743 15:12:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.743 15:12:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.743 15:12:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:44.743 15:12:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:44.743 15:12:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.743 15:12:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.743 15:12:00 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:44.743 15:12:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:44.743 15:12:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:44.743 15:12:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:44.743 15:12:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:44.743 15:12:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.743 15:12:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:44.743 15:12:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:44.743 15:12:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:44.743 15:12:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:44.743 15:12:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:44.743 15:12:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:44.743 Cannot find device "nvmf_tgt_br" 00:24:44.743 15:12:00 -- nvmf/common.sh@155 -- # true 00:24:44.743 15:12:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:44.743 Cannot find device "nvmf_tgt_br2" 00:24:44.743 15:12:00 -- nvmf/common.sh@156 -- # true 00:24:44.743 15:12:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:44.743 15:12:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:44.743 Cannot find device "nvmf_tgt_br" 00:24:44.743 15:12:00 -- nvmf/common.sh@158 -- # true 00:24:44.743 15:12:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:44.743 Cannot find device "nvmf_tgt_br2" 00:24:44.743 15:12:00 -- nvmf/common.sh@159 -- # true 00:24:44.743 15:12:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:44.743 15:12:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:44.743 15:12:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.743 15:12:00 -- nvmf/common.sh@162 -- # true 00:24:45.002 15:12:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.002 15:12:00 -- nvmf/common.sh@163 -- # true 00:24:45.002 15:12:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.002 15:12:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.002 15:12:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.002 15:12:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.002 15:12:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.002 15:12:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.002 15:12:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.002 15:12:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.002 15:12:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.002 15:12:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:45.002 15:12:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:45.002 15:12:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:45.002 15:12:00 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:45.002 15:12:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.002 15:12:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.002 15:12:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.002 15:12:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:45.002 15:12:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:45.002 15:12:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.002 15:12:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.002 15:12:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.002 15:12:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.002 15:12:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.002 15:12:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:45.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:24:45.002 00:24:45.002 --- 10.0.0.2 ping statistics --- 00:24:45.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.002 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:45.002 15:12:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:45.002 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:45.002 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:24:45.002 00:24:45.002 --- 10.0.0.3 ping statistics --- 00:24:45.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.002 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:45.002 15:12:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:45.002 00:24:45.002 --- 10.0.0.1 ping statistics --- 00:24:45.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.002 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:45.002 15:12:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.002 15:12:00 -- nvmf/common.sh@422 -- # return 0 00:24:45.002 15:12:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:45.002 15:12:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.002 15:12:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:45.002 15:12:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:45.002 15:12:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.002 15:12:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:45.002 15:12:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:45.260 15:12:00 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:45.260 15:12:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:45.260 15:12:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:45.260 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:24:45.260 15:12:00 -- nvmf/common.sh@470 -- # nvmfpid=82079 00:24:45.260 15:12:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.260 15:12:00 -- nvmf/common.sh@471 -- # waitforlisten 82079 00:24:45.260 15:12:00 -- common/autotest_common.sh@817 -- # '[' -z 82079 ']' 00:24:45.260 15:12:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.260 15:12:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:45.260 15:12:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.260 15:12:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:45.260 15:12:00 -- common/autotest_common.sh@10 -- # set +x 00:24:45.260 [2024-04-18 15:12:00.784119] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:24:45.260 [2024-04-18 15:12:00.784200] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.260 [2024-04-18 15:12:00.928509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.518 [2024-04-18 15:12:01.020862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.518 [2024-04-18 15:12:01.020913] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.518 [2024-04-18 15:12:01.020924] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.518 [2024-04-18 15:12:01.020932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.518 [2024-04-18 15:12:01.020939] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:45.518 [2024-04-18 15:12:01.020979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.105 15:12:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:46.105 15:12:01 -- common/autotest_common.sh@850 -- # return 0 00:24:46.105 15:12:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:46.105 15:12:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:46.105 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.105 15:12:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.105 15:12:01 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.105 15:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.105 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.105 [2024-04-18 15:12:01.710722] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.105 15:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.105 15:12:01 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:46.105 15:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.105 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.105 [2024-04-18 15:12:01.722885] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:46.105 15:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.105 15:12:01 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:46.105 15:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.105 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.105 null0 00:24:46.105 15:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.105 15:12:01 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:46.105 15:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.106 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.106 null1 00:24:46.106 15:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.106 15:12:01 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:46.106 15:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.106 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.106 15:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.106 15:12:01 -- host/discovery.sh@45 -- # hostpid=82126 00:24:46.106 15:12:01 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:46.106 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:46.106 15:12:01 -- host/discovery.sh@46 -- # waitforlisten 82126 /tmp/host.sock 00:24:46.106 15:12:01 -- common/autotest_common.sh@817 -- # '[' -z 82126 ']' 00:24:46.106 15:12:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:46.106 15:12:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:46.106 15:12:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:46.106 15:12:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:46.106 15:12:01 -- common/autotest_common.sh@10 -- # set +x 00:24:46.364 [2024-04-18 15:12:01.820972] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:24:46.364 [2024-04-18 15:12:01.821051] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82126 ] 00:24:46.364 [2024-04-18 15:12:01.963592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.364 [2024-04-18 15:12:02.051017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.300 15:12:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:47.300 15:12:02 -- common/autotest_common.sh@850 -- # return 0 00:24:47.300 15:12:02 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.301 15:12:02 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@72 -- # notify_id=0 00:24:47.301 15:12:02 -- host/discovery.sh@83 -- # get_subsystem_names 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # xargs 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # sort 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:47.301 15:12:02 -- host/discovery.sh@84 -- # get_bdev_list 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # sort 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # xargs 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:47.301 15:12:02 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@87 -- # get_subsystem_names 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- host/discovery.sh@59 
-- # sort 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # xargs 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:47.301 15:12:02 -- host/discovery.sh@88 -- # get_bdev_list 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # sort 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # xargs 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:47.301 15:12:02 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@91 -- # get_subsystem_names 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # sort 00:24:47.301 15:12:02 -- host/discovery.sh@59 -- # xargs 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.301 15:12:02 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:47.301 15:12:02 -- host/discovery.sh@92 -- # get_bdev_list 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.301 15:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # xargs 00:24:47.301 15:12:02 -- common/autotest_common.sh@10 -- # set +x 00:24:47.301 15:12:02 -- host/discovery.sh@55 -- # sort 00:24:47.301 15:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.560 15:12:03 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:47.560 15:12:03 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:47.560 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.560 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.560 [2024-04-18 15:12:03.033251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.560 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.560 15:12:03 -- host/discovery.sh@97 -- # get_subsystem_names 00:24:47.560 15:12:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.560 15:12:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.560 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.560 15:12:03 -- host/discovery.sh@59 -- # sort 00:24:47.560 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.560 15:12:03 -- host/discovery.sh@59 -- # xargs 00:24:47.560 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.560 15:12:03 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:47.560 15:12:03 
-- host/discovery.sh@98 -- # get_bdev_list 00:24:47.560 15:12:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.560 15:12:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.560 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.560 15:12:03 -- host/discovery.sh@55 -- # sort 00:24:47.560 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.560 15:12:03 -- host/discovery.sh@55 -- # xargs 00:24:47.560 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.560 15:12:03 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:47.560 15:12:03 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:47.560 15:12:03 -- host/discovery.sh@79 -- # expected_count=0 00:24:47.560 15:12:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:47.560 15:12:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:47.560 15:12:03 -- common/autotest_common.sh@901 -- # local max=10 00:24:47.561 15:12:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:47.561 15:12:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:47.561 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.561 15:12:03 -- host/discovery.sh@74 -- # jq '. | length' 00:24:47.561 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.561 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.561 15:12:03 -- host/discovery.sh@74 -- # notification_count=0 00:24:47.561 15:12:03 -- host/discovery.sh@75 -- # notify_id=0 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:47.561 15:12:03 -- common/autotest_common.sh@904 -- # return 0 00:24:47.561 15:12:03 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:47.561 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.561 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.561 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.561 15:12:03 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.561 15:12:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.561 15:12:03 -- common/autotest_common.sh@901 -- # local max=10 00:24:47.561 15:12:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:47.561 15:12:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.561 15:12:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.561 15:12:03 -- host/discovery.sh@59 -- # sort 00:24:47.561 15:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.561 15:12:03 -- host/discovery.sh@59 -- # xargs 00:24:47.561 15:12:03 -- common/autotest_common.sh@10 -- # set +x 00:24:47.561 15:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.561 15:12:03 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:24:47.561 15:12:03 -- common/autotest_common.sh@906 -- # sleep 1 00:24:48.128 [2024-04-18 15:12:03.710340] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.128 [2024-04-18 15:12:03.710385] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.128 [2024-04-18 15:12:03.710403] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.128 [2024-04-18 15:12:03.795812] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:48.386 [2024-04-18 15:12:03.851832] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:48.386 [2024-04-18 15:12:03.851879] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:48.645 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:48.645 15:12:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.645 15:12:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.645 15:12:04 -- host/discovery.sh@59 -- # sort 00:24:48.645 15:12:04 -- host/discovery.sh@59 -- # xargs 00:24:48.645 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.645 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.645 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.645 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.645 15:12:04 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.645 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.645 15:12:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.645 15:12:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.645 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.645 15:12:04 -- host/discovery.sh@55 -- # sort 00:24:48.645 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.645 15:12:04 -- host/discovery.sh@55 -- # xargs 00:24:48.645 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.645 15:12:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:48.645 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.645 15:12:04 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:48.645 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.645 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.645 15:12:04 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:48.904 15:12:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:48.904 15:12:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:48.904 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.904 15:12:04 -- host/discovery.sh@63 -- # xargs 00:24:48.904 15:12:04 -- host/discovery.sh@63 -- # sort -n 00:24:48.904 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.904 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:24:48.904 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.904 15:12:04 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:48.904 15:12:04 -- host/discovery.sh@79 -- # expected_count=1 00:24:48.904 15:12:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.904 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:48.904 15:12:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:48.904 15:12:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:48.904 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.904 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.904 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.904 15:12:04 -- host/discovery.sh@74 -- # notification_count=1 00:24:48.904 15:12:04 -- host/discovery.sh@75 -- # notify_id=1 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:48.904 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.904 15:12:04 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:48.904 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.904 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.904 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.904 15:12:04 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.904 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.904 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.904 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.904 15:12:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.904 15:12:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.904 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.904 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.904 15:12:04 -- host/discovery.sh@55 -- # sort 00:24:48.904 15:12:04 -- host/discovery.sh@55 -- # xargs 00:24:48.904 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.904 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.904 15:12:04 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:48.904 15:12:04 -- host/discovery.sh@79 -- # expected_count=1 00:24:48.904 15:12:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.904 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:48.904 15:12:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:48.904 15:12:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:48.904 15:12:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:48.905 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.905 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.905 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.905 15:12:04 -- host/discovery.sh@74 -- # notification_count=1 00:24:48.905 15:12:04 -- host/discovery.sh@75 -- # notify_id=2 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:48.905 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.905 15:12:04 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:48.905 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.905 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.905 [2024-04-18 15:12:04.563364] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.905 [2024-04-18 15:12:04.563905] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:48.905 [2024-04-18 15:12:04.563956] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.905 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.905 15:12:04 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.905 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:48.905 15:12:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.905 15:12:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.905 15:12:04 -- host/discovery.sh@59 -- # sort 00:24:48.905 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.905 15:12:04 -- host/discovery.sh@59 -- # xargs 00:24:48.905 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.905 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.905 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:48.905 15:12:04 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.905 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:48.905 15:12:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.905 15:12:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.905 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.905 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:48.905 15:12:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:49.164 15:12:04 -- host/discovery.sh@55 -- # sort 00:24:49.164 15:12:04 -- host/discovery.sh@55 -- # xargs 00:24:49.164 15:12:04 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:24:49.164 [2024-04-18 15:12:04.649810] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:49.164 15:12:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:49.164 15:12:04 -- common/autotest_common.sh@904 -- # return 0 00:24:49.164 15:12:04 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:49.164 15:12:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:49.164 15:12:04 -- common/autotest_common.sh@901 -- # local max=10 00:24:49.164 15:12:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:49.164 15:12:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:49.164 15:12:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:49.164 15:12:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:49.164 15:12:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.164 15:12:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.164 15:12:04 -- common/autotest_common.sh@10 -- # set +x 00:24:49.164 15:12:04 -- host/discovery.sh@63 -- # sort -n 00:24:49.164 15:12:04 -- host/discovery.sh@63 -- # xargs 00:24:49.164 15:12:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.164 [2024-04-18 15:12:04.711055] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:49.164 [2024-04-18 15:12:04.711099] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.164 [2024-04-18 15:12:04.711106] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:49.164 15:12:04 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.164 15:12:04 -- common/autotest_common.sh@906 -- # sleep 1 00:24:50.100 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.100 15:12:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:50.100 15:12:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:50.100 15:12:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:50.100 15:12:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:50.100 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.100 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.100 15:12:05 -- host/discovery.sh@63 -- # xargs 00:24:50.100 15:12:05 -- host/discovery.sh@63 -- # sort -n 00:24:50.100 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.100 15:12:05 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:50.100 15:12:05 -- common/autotest_common.sh@904 -- # return 0 00:24:50.100 15:12:05 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:50.100 15:12:05 -- host/discovery.sh@79 -- # expected_count=0 00:24:50.100 15:12:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.100 15:12:05 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.100 15:12:05 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.100 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.100 15:12:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.100 15:12:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.100 15:12:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.100 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.100 15:12:05 -- host/discovery.sh@74 -- # jq '. | length' 00:24:50.100 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.100 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 15:12:05 -- host/discovery.sh@74 -- # notification_count=0 00:24:50.359 15:12:05 -- host/discovery.sh@75 -- # notify_id=2 00:24:50.359 15:12:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.359 15:12:05 -- common/autotest_common.sh@904 -- # return 0 00:24:50.359 15:12:05 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.359 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 [2024-04-18 15:12:05.818613] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:50.359 [2024-04-18 15:12:05.818662] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:50.359 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 15:12:05 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.359 15:12:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.359 15:12:05 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.359 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.359 15:12:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:50.359 [2024-04-18 15:12:05.826062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.359 [2024-04-18 15:12:05.826104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-04-18 15:12:05.826117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.359 [2024-04-18 15:12:05.826126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-04-18 15:12:05.826137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.359 [2024-04-18 15:12:05.826147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-04-18 15:12:05.826156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.359 [2024-04-18 15:12:05.826166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.359 [2024-04-18 15:12:05.826177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.359 15:12:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:50.359 15:12:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.359 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.359 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.359 15:12:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.359 15:12:05 -- host/discovery.sh@59 -- # xargs 00:24:50.359 15:12:05 -- host/discovery.sh@59 -- # sort 00:24:50.359 [2024-04-18 15:12:05.835995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.359 [2024-04-18 15:12:05.846003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.359 [2024-04-18 15:12:05.846174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.359 [2024-04-18 15:12:05.846215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.359 [2024-04-18 15:12:05.846228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.359 [2024-04-18 15:12:05.846241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.359 [2024-04-18 15:12:05.846257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.359 [2024-04-18 15:12:05.846272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.359 [2024-04-18 15:12:05.846281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.359 [2024-04-18 15:12:05.846292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.359 [2024-04-18 15:12:05.846307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
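The connect() failures with errno = 111 and the repeated "Resetting controller failed" entries around here are consistent with the step the script just performed: the 10.0.0.2:4420 listener was removed from cnode0, so the host's reconnect attempts to port 4420 are refused while the 4421 path stays up. The target-side call that triggers this, as recorded a few entries earlier (rpc= is the same shorthand as above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Remove the first listener; the host's 4420 path now fails with ECONNREFUSED (111)
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420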
00:24:50.359 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.359 [2024-04-18 15:12:05.856052] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.360 [2024-04-18 15:12:05.856142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.856177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.856189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.360 [2024-04-18 15:12:05.856200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.360 [2024-04-18 15:12:05.856213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.360 [2024-04-18 15:12:05.856226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.360 [2024-04-18 15:12:05.856235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.360 [2024-04-18 15:12:05.856245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.360 [2024-04-18 15:12:05.856257] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.360 [2024-04-18 15:12:05.866090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.360 [2024-04-18 15:12:05.866179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.866214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.866227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.360 [2024-04-18 15:12:05.866238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.360 [2024-04-18 15:12:05.866253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.360 [2024-04-18 15:12:05.866266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.360 [2024-04-18 15:12:05.866275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.360 [2024-04-18 15:12:05.866285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.360 [2024-04-18 15:12:05.866297] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
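While the 4420 path churns through these reconnect attempts, the script keeps checking (waitforcondition over get_bdev_list in the log) that both namespaces remain visible to the host through the surviving path. The helper's internals are not shown in this log, so the loop below is only a stand-in sketch built from the jq/sort/xargs pipeline and the max=10 / sleep 1 retry values the log does show:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Poll up to 10 times for the expected bdev list, mirroring get_bdev_list in the log
for _ in $(seq 1 10); do
    bdevs=$($rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$bdevs" == "nvme0n1 nvme0n2" ]] && break
    sleep 1
done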
00:24:50.360 [2024-04-18 15:12:05.876127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.360 [2024-04-18 15:12:05.876189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.876223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.876236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.360 [2024-04-18 15:12:05.876246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.360 [2024-04-18 15:12:05.876259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.360 [2024-04-18 15:12:05.876271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.360 [2024-04-18 15:12:05.876280] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.360 [2024-04-18 15:12:05.876289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.360 [2024-04-18 15:12:05.876301] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.360 15:12:05 -- common/autotest_common.sh@904 -- # return 0 00:24:50.360 15:12:05 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.360 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:50.360 15:12:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.360 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.360 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.360 15:12:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.360 15:12:05 -- host/discovery.sh@55 -- # sort 00:24:50.360 15:12:05 -- host/discovery.sh@55 -- # xargs 00:24:50.360 [2024-04-18 15:12:05.886150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.360 [2024-04-18 15:12:05.886222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.886253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.886265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.360 [2024-04-18 15:12:05.886276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.360 [2024-04-18 15:12:05.886289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.360 [2024-04-18 15:12:05.886302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.360 [2024-04-18 15:12:05.886310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.360 [2024-04-18 15:12:05.886319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.360 [2024-04-18 15:12:05.886331] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.360 [2024-04-18 15:12:05.896180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.360 [2024-04-18 15:12:05.896267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.896303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.360 [2024-04-18 15:12:05.896315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf27a10 with addr=10.0.0.2, port=4420 00:24:50.360 [2024-04-18 15:12:05.896325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf27a10 is same with the state(5) to be set 00:24:50.360 [2024-04-18 15:12:05.896339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf27a10 (9): Bad file descriptor 00:24:50.360 [2024-04-18 15:12:05.896352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.360 [2024-04-18 15:12:05.896361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.360 [2024-04-18 15:12:05.896370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.360 [2024-04-18 15:12:05.896382] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
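The common/autotest_common.sh@900-904 frames above outline the waitforcondition helper the discovery test polls with, and host/discovery.sh@55 shows the get_bdev_list pipeline it evaluates. A hedged reconstruction from the visible trace (the retry budget of 10 is shown; the per-attempt delay is an assumption):

    # get_bdev_list, as traced at host/discovery.sh@55.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # waitforcondition: re-evaluate a condition string until it holds or the
    # attempt budget runs out (loop shape inferred from the xtrace).
    waitforcondition() {
        local cond=$1
        local max=10            # 'local max=10' is visible in the trace
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1             # assumed delay between attempts
        done
        return 1
    }
    # Usage, as in host/discovery.sh@130:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'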
00:24:50.360 [2024-04-18 15:12:05.906195] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:50.360 [2024-04-18 15:12:05.906229] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:50.360 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:50.360 15:12:05 -- common/autotest_common.sh@904 -- # return 0 00:24:50.360 15:12:05 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.360 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:50.360 15:12:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:50.360 15:12:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.360 15:12:05 -- common/autotest_common.sh@10 -- # set +x 00:24:50.360 15:12:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:50.360 15:12:05 -- host/discovery.sh@63 -- # sort -n 00:24:50.360 15:12:05 -- host/discovery.sh@63 -- # xargs 00:24:50.360 15:12:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.360 15:12:05 -- common/autotest_common.sh@904 -- # return 0 00:24:50.360 15:12:05 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:50.360 15:12:05 -- host/discovery.sh@79 -- # expected_count=0 00:24:50.360 15:12:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.360 15:12:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.360 15:12:05 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.360 15:12:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.360 15:12:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.360 15:12:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.360 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.360 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.360 15:12:06 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:50.360 15:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.360 15:12:06 -- host/discovery.sh@74 -- # notification_count=0 00:24:50.360 15:12:06 -- host/discovery.sh@75 -- # notify_id=2 00:24:50.360 15:12:06 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.360 15:12:06 -- common/autotest_common.sh@904 -- # return 0 00:24:50.360 15:12:06 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:50.360 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.360 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.360 15:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.360 15:12:06 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:50.360 15:12:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:50.360 15:12:06 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.360 15:12:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.360 15:12:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:50.619 15:12:06 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.619 15:12:06 -- host/discovery.sh@59 -- # sort 00:24:50.619 15:12:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.619 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.619 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.619 15:12:06 -- host/discovery.sh@59 -- # xargs 00:24:50.619 15:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:50.619 15:12:06 -- common/autotest_common.sh@904 -- # return 0 00:24:50.619 15:12:06 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:50.619 15:12:06 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:50.619 15:12:06 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.619 15:12:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:50.619 15:12:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.619 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.619 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.619 15:12:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.619 15:12:06 -- host/discovery.sh@55 -- # sort 00:24:50.619 15:12:06 -- host/discovery.sh@55 -- # xargs 00:24:50.619 15:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:50.619 15:12:06 -- common/autotest_common.sh@904 -- # return 0 00:24:50.619 15:12:06 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:50.619 15:12:06 -- host/discovery.sh@79 -- # expected_count=2 00:24:50.619 15:12:06 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.619 15:12:06 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.619 15:12:06 -- common/autotest_common.sh@901 -- # 
local max=10 00:24:50.619 15:12:06 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.619 15:12:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.619 15:12:06 -- host/discovery.sh@74 -- # jq '. | length' 00:24:50.619 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.619 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:50.619 15:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.619 15:12:06 -- host/discovery.sh@74 -- # notification_count=2 00:24:50.619 15:12:06 -- host/discovery.sh@75 -- # notify_id=4 00:24:50.619 15:12:06 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.619 15:12:06 -- common/autotest_common.sh@904 -- # return 0 00:24:50.619 15:12:06 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.619 15:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.619 15:12:06 -- common/autotest_common.sh@10 -- # set +x 00:24:51.600 [2024-04-18 15:12:07.235020] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.600 [2024-04-18 15:12:07.235054] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.600 [2024-04-18 15:12:07.235070] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.859 [2024-04-18 15:12:07.320986] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:51.859 [2024-04-18 15:12:07.380245] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.859 [2024-04-18 15:12:07.380306] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:51.859 15:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.859 15:12:07 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.860 15:12:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.860 15:12:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.860 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 2024/04/18 15:12:07 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 
hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:51.860 request: 00:24:51.860 { 00:24:51.860 "method": "bdev_nvme_start_discovery", 00:24:51.860 "params": { 00:24:51.860 "name": "nvme", 00:24:51.860 "trtype": "tcp", 00:24:51.860 "traddr": "10.0.0.2", 00:24:51.860 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.860 "adrfam": "ipv4", 00:24:51.860 "trsvcid": "8009", 00:24:51.860 "wait_for_attach": true 00:24:51.860 } 00:24:51.860 } 00:24:51.860 Got JSON-RPC error response 00:24:51.860 GoRPCClient: error on JSON-RPC call 00:24:51.860 15:12:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:51.860 15:12:07 -- common/autotest_common.sh@641 -- # es=1 00:24:51.860 15:12:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:51.860 15:12:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:51.860 15:12:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:51.860 15:12:07 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.860 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.860 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # sort 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # xargs 00:24:51.860 15:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.860 15:12:07 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:51.860 15:12:07 -- host/discovery.sh@146 -- # get_bdev_list 00:24:51.860 15:12:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.860 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.860 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 15:12:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.860 15:12:07 -- host/discovery.sh@55 -- # sort 00:24:51.860 15:12:07 -- host/discovery.sh@55 -- # xargs 00:24:51.860 15:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.860 15:12:07 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.860 15:12:07 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.860 15:12:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:51.860 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.860 15:12:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.860 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.860 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 2024/04/18 15:12:07 error on JSON-RPC call, method: 
bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:51.860 request: 00:24:51.860 { 00:24:51.860 "method": "bdev_nvme_start_discovery", 00:24:51.860 "params": { 00:24:51.860 "name": "nvme_second", 00:24:51.860 "trtype": "tcp", 00:24:51.860 "traddr": "10.0.0.2", 00:24:51.860 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.860 "adrfam": "ipv4", 00:24:51.860 "trsvcid": "8009", 00:24:51.860 "wait_for_attach": true 00:24:51.860 } 00:24:51.860 } 00:24:51.860 Got JSON-RPC error response 00:24:51.860 GoRPCClient: error on JSON-RPC call 00:24:51.860 15:12:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:51.860 15:12:07 -- common/autotest_common.sh@641 -- # es=1 00:24:51.860 15:12:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:51.860 15:12:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:51.860 15:12:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:51.860 15:12:07 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.860 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.860 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # sort 00:24:51.860 15:12:07 -- host/discovery.sh@67 -- # xargs 00:24:51.860 15:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.118 15:12:07 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:52.118 15:12:07 -- host/discovery.sh@152 -- # get_bdev_list 00:24:52.118 15:12:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.118 15:12:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.118 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.118 15:12:07 -- common/autotest_common.sh@10 -- # set +x 00:24:52.118 15:12:07 -- host/discovery.sh@55 -- # sort 00:24:52.118 15:12:07 -- host/discovery.sh@55 -- # xargs 00:24:52.118 15:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:52.118 15:12:07 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.118 15:12:07 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.118 15:12:07 -- common/autotest_common.sh@638 -- # local es=0 00:24:52.118 15:12:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.118 15:12:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:52.118 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:52.118 15:12:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:52.118 15:12:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:52.118 15:12:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:52.118 15:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:52.118 15:12:07 -- common/autotest_common.sh@10 
-- # set +x 00:24:53.055 [2024-04-18 15:12:08.656197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.055 [2024-04-18 15:12:08.656284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.055 [2024-04-18 15:12:08.656299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf800b0 with addr=10.0.0.2, port=8010 00:24:53.055 [2024-04-18 15:12:08.656321] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:53.055 [2024-04-18 15:12:08.656332] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:53.055 [2024-04-18 15:12:08.656342] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:53.990 [2024-04-18 15:12:09.654620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.990 [2024-04-18 15:12:09.654744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.990 [2024-04-18 15:12:09.654760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf800b0 with addr=10.0.0.2, port=8010 00:24:53.990 [2024-04-18 15:12:09.654784] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:53.990 [2024-04-18 15:12:09.654806] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:53.990 [2024-04-18 15:12:09.654817] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:55.366 [2024-04-18 15:12:10.652823] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:55.366 2024/04/18 15:12:10 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:24:55.366 request: 00:24:55.366 { 00:24:55.366 "method": "bdev_nvme_start_discovery", 00:24:55.366 "params": { 00:24:55.366 "name": "nvme_second", 00:24:55.366 "trtype": "tcp", 00:24:55.366 "traddr": "10.0.0.2", 00:24:55.366 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:55.366 "adrfam": "ipv4", 00:24:55.366 "trsvcid": "8010", 00:24:55.366 "attach_timeout_ms": 3000 00:24:55.366 } 00:24:55.366 } 00:24:55.366 Got JSON-RPC error response 00:24:55.366 GoRPCClient: error on JSON-RPC call 00:24:55.366 15:12:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:55.366 15:12:10 -- common/autotest_common.sh@641 -- # es=1 00:24:55.366 15:12:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:55.366 15:12:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:55.366 15:12:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:55.366 15:12:10 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:55.366 15:12:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:55.366 15:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:55.366 15:12:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:55.366 15:12:10 -- common/autotest_common.sh@10 -- # set +x 00:24:55.366 15:12:10 -- host/discovery.sh@67 -- # sort 00:24:55.366 15:12:10 -- host/discovery.sh@67 -- # xargs 00:24:55.366 15:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.366 15:12:10 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:55.366 15:12:10 -- 
host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:55.366 15:12:10 -- host/discovery.sh@161 -- # kill 82126 00:24:55.366 15:12:10 -- host/discovery.sh@162 -- # nvmftestfini 00:24:55.366 15:12:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:55.366 15:12:10 -- nvmf/common.sh@117 -- # sync 00:24:55.366 15:12:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.366 15:12:10 -- nvmf/common.sh@120 -- # set +e 00:24:55.366 15:12:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.366 15:12:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.366 rmmod nvme_tcp 00:24:55.366 rmmod nvme_fabrics 00:24:55.366 rmmod nvme_keyring 00:24:55.366 15:12:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.366 15:12:10 -- nvmf/common.sh@124 -- # set -e 00:24:55.366 15:12:10 -- nvmf/common.sh@125 -- # return 0 00:24:55.367 15:12:10 -- nvmf/common.sh@478 -- # '[' -n 82079 ']' 00:24:55.367 15:12:10 -- nvmf/common.sh@479 -- # killprocess 82079 00:24:55.367 15:12:10 -- common/autotest_common.sh@936 -- # '[' -z 82079 ']' 00:24:55.367 15:12:10 -- common/autotest_common.sh@940 -- # kill -0 82079 00:24:55.367 15:12:10 -- common/autotest_common.sh@941 -- # uname 00:24:55.367 15:12:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.367 15:12:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82079 00:24:55.367 15:12:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:55.367 killing process with pid 82079 00:24:55.367 15:12:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:55.367 15:12:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82079' 00:24:55.367 15:12:10 -- common/autotest_common.sh@955 -- # kill 82079 00:24:55.367 15:12:10 -- common/autotest_common.sh@960 -- # wait 82079 00:24:55.625 15:12:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:55.625 15:12:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:55.625 15:12:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:55.625 15:12:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.625 15:12:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.625 15:12:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.625 15:12:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.625 15:12:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.625 15:12:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:55.625 00:24:55.625 real 0m11.066s 00:24:55.625 user 0m20.905s 00:24:55.625 sys 0m2.277s 00:24:55.625 15:12:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:55.625 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.625 ************************************ 00:24:55.625 END TEST nvmf_discovery 00:24:55.625 ************************************ 00:24:55.625 15:12:11 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:55.625 15:12:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:55.625 15:12:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.625 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.625 ************************************ 00:24:55.625 START TEST nvmf_discovery_remove_ifc 00:24:55.625 ************************************ 00:24:55.625 15:12:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh 
--transport=tcp 00:24:55.884 * Looking for test storage... 00:24:55.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:55.885 15:12:11 -- nvmf/common.sh@7 -- # uname -s 00:24:55.885 15:12:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.885 15:12:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.885 15:12:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.885 15:12:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.885 15:12:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.885 15:12:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.885 15:12:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.885 15:12:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.885 15:12:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.885 15:12:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:55.885 15:12:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:24:55.885 15:12:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.885 15:12:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.885 15:12:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:55.885 15:12:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.885 15:12:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.885 15:12:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.885 15:12:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.885 15:12:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.885 15:12:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 15:12:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 15:12:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 15:12:11 -- paths/export.sh@5 -- # export PATH 00:24:55.885 15:12:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.885 15:12:11 -- nvmf/common.sh@47 -- # : 0 00:24:55.885 15:12:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.885 15:12:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.885 15:12:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.885 15:12:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.885 15:12:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.885 15:12:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.885 15:12:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.885 15:12:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:55.885 15:12:11 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:55.885 15:12:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:55.885 15:12:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.885 15:12:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:55.885 15:12:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:55.885 15:12:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:55.885 15:12:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.885 15:12:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.885 15:12:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.885 15:12:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:55.885 15:12:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:55.885 15:12:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.885 15:12:11 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.885 15:12:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:55.885 15:12:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:55.885 15:12:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:55.885 15:12:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:55.885 15:12:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:55.885 15:12:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.885 15:12:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:55.885 15:12:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:55.885 15:12:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:55.885 15:12:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:55.885 15:12:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:55.885 15:12:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:55.885 Cannot find device "nvmf_tgt_br" 00:24:55.885 15:12:11 -- nvmf/common.sh@155 -- # true 00:24:55.885 15:12:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.885 Cannot find device "nvmf_tgt_br2" 00:24:55.885 15:12:11 -- nvmf/common.sh@156 -- # true 00:24:55.885 15:12:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:55.885 15:12:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:55.885 Cannot find device "nvmf_tgt_br" 00:24:55.885 15:12:11 -- nvmf/common.sh@158 -- # true 00:24:55.885 15:12:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:55.885 Cannot find device "nvmf_tgt_br2" 00:24:55.885 15:12:11 -- nvmf/common.sh@159 -- # true 00:24:55.885 15:12:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:56.144 15:12:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:56.144 15:12:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.145 15:12:11 -- nvmf/common.sh@162 -- # true 00:24:56.145 15:12:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.145 15:12:11 -- nvmf/common.sh@163 -- # true 00:24:56.145 15:12:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:56.145 15:12:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:56.145 15:12:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:56.145 15:12:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:56.145 15:12:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:56.145 15:12:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:56.145 15:12:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:56.145 15:12:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:56.145 15:12:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:56.145 15:12:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:56.145 15:12:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:56.145 15:12:11 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:56.145 15:12:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:56.145 15:12:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:56.145 15:12:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:56.145 15:12:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:56.145 15:12:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:56.145 15:12:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:56.145 15:12:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:56.145 15:12:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:56.145 15:12:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:56.145 15:12:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:56.145 15:12:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:56.145 15:12:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:56.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:24:56.145 00:24:56.145 --- 10.0.0.2 ping statistics --- 00:24:56.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.145 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:24:56.145 15:12:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:56.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:56.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:56.145 00:24:56.145 --- 10.0.0.3 ping statistics --- 00:24:56.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.145 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:56.145 15:12:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:56.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:56.145 00:24:56.145 --- 10.0.0.1 ping statistics --- 00:24:56.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.145 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:56.145 15:12:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.145 15:12:11 -- nvmf/common.sh@422 -- # return 0 00:24:56.145 15:12:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:56.404 15:12:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.404 15:12:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:56.404 15:12:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:56.404 15:12:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.404 15:12:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:56.404 15:12:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:56.404 15:12:11 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:56.404 15:12:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:56.404 15:12:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:56.404 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:24:56.404 15:12:11 -- nvmf/common.sh@470 -- # nvmfpid=82617 00:24:56.404 15:12:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:56.404 15:12:11 -- nvmf/common.sh@471 -- # waitforlisten 82617 00:24:56.404 15:12:11 -- common/autotest_common.sh@817 -- # '[' -z 82617 ']' 00:24:56.404 15:12:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.404 15:12:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:56.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.404 15:12:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.404 15:12:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:56.404 15:12:11 -- common/autotest_common.sh@10 -- # set +x 00:24:56.404 [2024-04-18 15:12:11.944381] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:24:56.404 [2024-04-18 15:12:11.944467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.404 [2024-04-18 15:12:12.084666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.663 [2024-04-18 15:12:12.168752] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.663 [2024-04-18 15:12:12.168812] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.663 [2024-04-18 15:12:12.168822] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.663 [2024-04-18 15:12:12.168831] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.663 [2024-04-18 15:12:12.168838] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
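The nvmf/common.sh@141-209 sequence above builds the virtual test network and then launches the target inside it: a network namespace for the target, veth pairs bridged to the initiator side, an iptables accept rule for the NVMe/TCP port, ping checks, and finally nvmf_tgt started under ip netns exec. A condensed sketch of the same steps using the names and addresses visible in the log (run as root; the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity, and the nvmf_tgt path is the repo-relative build output):

    # Namespace and veth topology used by the test.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target address
    modprobe nvme-tcp
    # Target application, pinned to core 1 (-m 0x2), inside the namespace.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &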
00:24:56.663 [2024-04-18 15:12:12.168877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.231 15:12:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:57.231 15:12:12 -- common/autotest_common.sh@850 -- # return 0 00:24:57.231 15:12:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:57.231 15:12:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:57.231 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.231 15:12:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.231 15:12:12 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:57.231 15:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.231 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.231 [2024-04-18 15:12:12.867512] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.231 [2024-04-18 15:12:12.875634] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:57.231 null0 00:24:57.231 [2024-04-18 15:12:12.907505] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.231 15:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.231 15:12:12 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82667 00:24:57.231 15:12:12 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:57.231 15:12:12 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82667 /tmp/host.sock 00:24:57.231 15:12:12 -- common/autotest_common.sh@817 -- # '[' -z 82667 ']' 00:24:57.231 15:12:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:57.231 15:12:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.231 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:57.231 15:12:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:57.231 15:12:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.231 15:12:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.490 [2024-04-18 15:12:12.983315] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:24:57.490 [2024-04-18 15:12:12.983406] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82667 ] 00:24:57.490 [2024-04-18 15:12:13.125728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.748 [2024-04-18 15:12:13.214320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.312 15:12:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.312 15:12:13 -- common/autotest_common.sh@850 -- # return 0 00:24:58.312 15:12:13 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.312 15:12:13 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:58.312 15:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.312 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:58.312 15:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.312 15:12:13 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:58.312 15:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.312 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:58.312 15:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.312 15:12:13 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:58.312 15:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.312 15:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:59.685 [2024-04-18 15:12:14.977001] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:59.685 [2024-04-18 15:12:14.977076] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:59.685 [2024-04-18 15:12:14.977092] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:59.685 [2024-04-18 15:12:15.063000] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:59.685 [2024-04-18 15:12:15.119468] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:59.685 [2024-04-18 15:12:15.119558] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:59.685 [2024-04-18 15:12:15.119588] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:59.685 [2024-04-18 15:12:15.119606] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:59.685 [2024-04-18 15:12:15.119635] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:59.685 15:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:59.685 [2024-04-18 15:12:15.125254] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1dd8930 was disconnected and freed. delete nvme_qpair. 
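On the host side, the trace above starts a second SPDK app (core 0, RPC socket /tmp/host.sock, --wait-for-rpc, bdev_nvme debug logging), enables controller error logging, completes framework init, and then starts discovery with short reconnect limits so the interface removal later in the test is detected quickly. A sketch of that sequence with scripts/rpc.py (binary path illustrative; the test's rpc_cmd wrapper issues the same calls):

    # Host application on core 0, RPC over /tmp/host.sock.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    # RPC sequence mirrored from the xtrace above.
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach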
00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.685 15:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.685 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.685 15:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.685 15:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.685 15:12:15 -- common/autotest_common.sh@10 -- # set +x 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.685 15:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:59.685 15:12:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.621 15:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.621 15:12:16 -- common/autotest_common.sh@10 -- # set +x 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.621 15:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.621 15:12:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.009 15:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.009 15:12:17 -- common/autotest_common.sh@10 -- # set +x 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.009 15:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:02.009 15:12:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@29 -- # 
jq -r '.[].name' 00:25:03.001 15:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.001 15:12:18 -- common/autotest_common.sh@10 -- # set +x 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.001 15:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:03.001 15:12:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.936 15:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.936 15:12:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.936 15:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:03.936 15:12:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.870 15:12:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.870 15:12:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.870 15:12:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.871 15:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.871 15:12:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.871 15:12:20 -- common/autotest_common.sh@10 -- # set +x 00:25:04.871 15:12:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.871 15:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.871 [2024-04-18 15:12:20.537993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:04.871 [2024-04-18 15:12:20.538056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.871 [2024-04-18 15:12:20.538072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.871 [2024-04-18 15:12:20.538087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.871 [2024-04-18 15:12:20.538096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.871 [2024-04-18 15:12:20.538106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.871 [2024-04-18 15:12:20.538116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.871 [2024-04-18 15:12:20.538126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.871 [2024-04-18 15:12:20.538137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.871 [2024-04-18 15:12:20.538147] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.871 [2024-04-18 15:12:20.538155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.871 [2024-04-18 15:12:20.538165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4a5f0 is same with the state(5) to be set 00:25:04.871 15:12:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:04.871 15:12:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.871 [2024-04-18 15:12:20.547970] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4a5f0 (9): Bad file descriptor 00:25:04.871 [2024-04-18 15:12:20.557977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:06.250 15:12:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.250 15:12:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.250 15:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.250 15:12:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.250 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:25:06.250 15:12:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.250 15:12:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.250 [2024-04-18 15:12:21.621633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:07.186 [2024-04-18 15:12:22.645627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:07.186 [2024-04-18 15:12:22.645771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4a5f0 with addr=10.0.0.2, port=4420 00:25:07.186 [2024-04-18 15:12:22.645845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4a5f0 is same with the state(5) to be set 00:25:07.186 [2024-04-18 15:12:22.646946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4a5f0 (9): Bad file descriptor 00:25:07.186 [2024-04-18 15:12:22.647037] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
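For reference, the wait loop being traced above reduces to two small helpers. The sketch below is reconstructed from the xtrace and assumes rpc_cmd wraps SPDK's scripts/rpc.py against the host application's RPC socket shown in the trace (/tmp/host.sock); the real discovery_remove_ifc.sh may add timeouts and traps on top of this.

get_bdev_list() {
    # List all bdevs known to the host app, keep only their names,
    # and flatten them into one sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value
    # (an empty argument means "wait until no bdevs are left").
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}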
00:25:07.186 [2024-04-18 15:12:22.647100] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:07.186 [2024-04-18 15:12:22.647179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.186 [2024-04-18 15:12:22.647215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.186 [2024-04-18 15:12:22.647251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.186 [2024-04-18 15:12:22.647281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.186 [2024-04-18 15:12:22.647312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.186 [2024-04-18 15:12:22.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.186 [2024-04-18 15:12:22.647372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.186 [2024-04-18 15:12:22.647401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.186 [2024-04-18 15:12:22.647433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.186 [2024-04-18 15:12:22.647461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.186 [2024-04-18 15:12:22.647490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:07.186 [2024-04-18 15:12:22.647528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d49470 (9): Bad file descriptor 00:25:07.186 [2024-04-18 15:12:22.648075] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:07.186 [2024-04-18 15:12:22.648123] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:07.186 15:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:07.186 15:12:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:07.186 15:12:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.123 15:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.123 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.123 15:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.123 15:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:08.123 15:12:23 -- common/autotest_common.sh@10 -- # set +x 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:08.123 15:12:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:08.123 15:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.382 15:12:23 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:08.382 15:12:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:08.949 [2024-04-18 15:12:24.653650] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:08.949 [2024-04-18 15:12:24.653698] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:08.949 [2024-04-18 15:12:24.653715] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.207 [2024-04-18 15:12:24.739636] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:09.207 [2024-04-18 15:12:24.794686] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:09.207 [2024-04-18 15:12:24.794733] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:09.207 [2024-04-18 15:12:24.794755] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:09.207 [2024-04-18 15:12:24.794772] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:25:09.207 [2024-04-18 15:12:24.794781] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:09.207 [2024-04-18 15:12:24.802053] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1db01f0 was disconnected and freed. delete nvme_qpair. 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.207 15:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.207 15:12:24 -- common/autotest_common.sh@10 -- # set +x 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.207 15:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:09.207 15:12:24 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82667 00:25:09.207 15:12:24 -- common/autotest_common.sh@936 -- # '[' -z 82667 ']' 00:25:09.207 15:12:24 -- common/autotest_common.sh@940 -- # kill -0 82667 00:25:09.208 15:12:24 -- common/autotest_common.sh@941 -- # uname 00:25:09.208 15:12:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.208 15:12:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82667 00:25:09.467 killing process with pid 82667 00:25:09.467 15:12:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:09.467 15:12:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:09.467 15:12:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82667' 00:25:09.467 15:12:24 -- common/autotest_common.sh@955 -- # kill 82667 00:25:09.467 15:12:24 -- common/autotest_common.sh@960 -- # wait 82667 00:25:09.467 15:12:25 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:09.467 15:12:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:09.467 15:12:25 -- nvmf/common.sh@117 -- # sync 00:25:09.726 15:12:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.726 15:12:25 -- nvmf/common.sh@120 -- # set +e 00:25:09.726 15:12:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.726 15:12:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.726 rmmod nvme_tcp 00:25:09.726 rmmod nvme_fabrics 00:25:09.726 rmmod nvme_keyring 00:25:09.726 15:12:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:09.726 15:12:25 -- nvmf/common.sh@124 -- # set -e 00:25:09.726 15:12:25 -- nvmf/common.sh@125 -- # return 0 00:25:09.726 15:12:25 -- nvmf/common.sh@478 -- # '[' -n 82617 ']' 00:25:09.726 15:12:25 -- nvmf/common.sh@479 -- # killprocess 82617 00:25:09.726 15:12:25 -- common/autotest_common.sh@936 -- # '[' -z 82617 ']' 00:25:09.726 15:12:25 -- common/autotest_common.sh@940 -- # kill -0 82617 00:25:09.726 15:12:25 -- common/autotest_common.sh@941 -- # uname 00:25:09.726 15:12:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.726 15:12:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82617 00:25:09.726 15:12:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:09.726 killing process with pid 82617 00:25:09.726 15:12:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
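The discovery_remove_ifc scenario traced above boils down to flapping the target-side interface inside its network namespace and checking that the host's discovery service first drops and then re-creates the attached bdev. The commands are taken verbatim from the trace; the namespace, interface and address names are the ones this run uses:

# Pull the target address out from under the connected host ...
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
# ... wait for the host-side bdev (nvme0n1) to disappear, then restore the interface:
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# The discovery poller reconnects and attaches the subsystem as a new bdev (nvme1n1 above).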
00:25:09.726 15:12:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82617' 00:25:09.726 15:12:25 -- common/autotest_common.sh@955 -- # kill 82617 00:25:09.726 15:12:25 -- common/autotest_common.sh@960 -- # wait 82617 00:25:09.987 15:12:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:09.987 15:12:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:09.987 15:12:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:09.987 15:12:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.987 15:12:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.987 15:12:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.987 15:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.987 15:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.987 15:12:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:09.987 00:25:09.987 real 0m14.295s 00:25:09.987 user 0m23.753s 00:25:09.987 sys 0m2.289s 00:25:09.987 15:12:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:09.987 15:12:25 -- common/autotest_common.sh@10 -- # set +x 00:25:09.987 ************************************ 00:25:09.987 END TEST nvmf_discovery_remove_ifc 00:25:09.987 ************************************ 00:25:09.987 15:12:25 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:09.987 15:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:09.987 15:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.987 15:12:25 -- common/autotest_common.sh@10 -- # set +x 00:25:10.247 ************************************ 00:25:10.247 START TEST nvmf_identify_kernel_target 00:25:10.247 ************************************ 00:25:10.247 15:12:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:10.247 * Looking for test storage... 
00:25:10.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:10.247 15:12:25 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:10.247 15:12:25 -- nvmf/common.sh@7 -- # uname -s 00:25:10.247 15:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.247 15:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.247 15:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.247 15:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.247 15:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.247 15:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.247 15:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.247 15:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.247 15:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.247 15:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.247 15:12:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:10.247 15:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:10.508 15:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.508 15:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.508 15:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:10.508 15:12:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.508 15:12:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:10.508 15:12:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.508 15:12:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.508 15:12:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.508 15:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.508 15:12:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.508 15:12:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.508 15:12:25 -- paths/export.sh@5 -- # export PATH 00:25:10.508 15:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.508 15:12:25 -- nvmf/common.sh@47 -- # : 0 00:25:10.508 15:12:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.508 15:12:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.508 15:12:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.508 15:12:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.508 15:12:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.508 15:12:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.508 15:12:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.508 15:12:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.508 15:12:25 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:10.508 15:12:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:10.508 15:12:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.508 15:12:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:10.508 15:12:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:10.508 15:12:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:10.508 15:12:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.508 15:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.508 15:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.508 15:12:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:10.508 15:12:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:10.508 15:12:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:10.508 15:12:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:10.508 15:12:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:10.508 15:12:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:10.508 15:12:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.508 15:12:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.508 15:12:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:10.508 15:12:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:10.508 15:12:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:10.508 15:12:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:10.508 15:12:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:10.508 15:12:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:10.508 15:12:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:10.508 15:12:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:10.508 15:12:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:10.508 15:12:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:10.508 15:12:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:10.508 15:12:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:10.508 Cannot find device "nvmf_tgt_br" 00:25:10.508 15:12:26 -- nvmf/common.sh@155 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:10.508 Cannot find device "nvmf_tgt_br2" 00:25:10.508 15:12:26 -- nvmf/common.sh@156 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:10.508 15:12:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:10.508 Cannot find device "nvmf_tgt_br" 00:25:10.508 15:12:26 -- nvmf/common.sh@158 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:10.508 Cannot find device "nvmf_tgt_br2" 00:25:10.508 15:12:26 -- nvmf/common.sh@159 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:10.508 15:12:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:10.508 15:12:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:10.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:10.508 15:12:26 -- nvmf/common.sh@162 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:10.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:10.508 15:12:26 -- nvmf/common.sh@163 -- # true 00:25:10.508 15:12:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:10.508 15:12:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:10.768 15:12:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:10.768 15:12:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:10.768 15:12:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:10.768 15:12:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:10.768 15:12:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:10.768 15:12:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:10.768 15:12:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:10.768 15:12:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:10.768 15:12:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:10.768 15:12:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:10.768 15:12:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:10.768 15:12:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:10.768 15:12:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:10.768 15:12:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:10.768 15:12:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:10.768 15:12:26 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:10.768 15:12:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:10.768 15:12:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:10.768 15:12:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:10.768 15:12:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:10.768 15:12:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:10.768 15:12:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:10.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:25:10.768 00:25:10.768 --- 10.0.0.2 ping statistics --- 00:25:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.768 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:10.768 15:12:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:10.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:10.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:25:10.768 00:25:10.768 --- 10.0.0.3 ping statistics --- 00:25:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.768 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:10.768 15:12:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:10.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:25:10.768 00:25:10.768 --- 10.0.0.1 ping statistics --- 00:25:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.768 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:10.768 15:12:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.027 15:12:26 -- nvmf/common.sh@422 -- # return 0 00:25:11.027 15:12:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:11.027 15:12:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.027 15:12:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.027 15:12:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:11.027 15:12:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:11.027 15:12:26 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:11.027 15:12:26 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:11.027 15:12:26 -- nvmf/common.sh@717 -- # local ip 00:25:11.027 15:12:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:11.027 15:12:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:11.027 15:12:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.027 15:12:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.027 15:12:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:11.027 15:12:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:11.027 15:12:26 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:11.027 15:12:26 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:11.027 15:12:26 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:11.027 15:12:26 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:11.027 15:12:26 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.027 15:12:26 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:11.027 15:12:26 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:11.027 15:12:26 -- nvmf/common.sh@628 -- # local block nvme 00:25:11.027 15:12:26 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:11.027 15:12:26 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:11.027 15:12:26 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:11.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:11.595 Waiting for block devices as requested 00:25:11.595 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.595 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:11.854 15:12:27 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:11.854 15:12:27 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:11.854 15:12:27 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:11.854 15:12:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:11.854 15:12:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:11.854 15:12:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:11.854 15:12:27 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:11.854 15:12:27 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:11.854 15:12:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:11.854 No valid GPT data, bailing 00:25:11.854 15:12:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:11.854 15:12:27 -- scripts/common.sh@391 -- # pt= 00:25:11.854 15:12:27 -- scripts/common.sh@392 -- # return 1 00:25:11.854 15:12:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:11.854 15:12:27 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:11.854 15:12:27 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:11.854 15:12:27 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:25:11.854 15:12:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:11.854 15:12:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:11.854 15:12:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:11.854 15:12:27 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:25:11.854 15:12:27 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:11.855 15:12:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:11.855 No valid GPT data, bailing 00:25:11.855 15:12:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:11.855 15:12:27 -- scripts/common.sh@391 -- # pt= 00:25:11.855 15:12:27 -- scripts/common.sh@392 -- # return 1 00:25:11.855 15:12:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:25:11.855 15:12:27 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:11.855 15:12:27 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:11.855 15:12:27 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:25:11.855 15:12:27 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:11.855 15:12:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:11.855 15:12:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:11.855 15:12:27 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:25:11.855 15:12:27 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:11.855 15:12:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:11.855 No valid GPT data, bailing 00:25:11.855 15:12:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:12.113 15:12:27 -- scripts/common.sh@391 -- # pt= 00:25:12.113 15:12:27 -- scripts/common.sh@392 -- # return 1 00:25:12.113 15:12:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:25:12.113 15:12:27 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:12.113 15:12:27 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:12.113 15:12:27 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:25:12.113 15:12:27 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:12.113 15:12:27 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:12.113 15:12:27 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:12.113 15:12:27 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:25:12.113 15:12:27 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:12.113 15:12:27 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:12.113 No valid GPT data, bailing 00:25:12.113 15:12:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:12.113 15:12:27 -- scripts/common.sh@391 -- # pt= 00:25:12.113 15:12:27 -- scripts/common.sh@392 -- # return 1 00:25:12.113 15:12:27 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:25:12.113 15:12:27 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:25:12.113 15:12:27 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.113 15:12:27 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:12.113 15:12:27 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:12.113 15:12:27 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:12.113 15:12:27 -- nvmf/common.sh@656 -- # echo 1 00:25:12.113 15:12:27 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:25:12.113 15:12:27 -- nvmf/common.sh@658 -- # echo 1 00:25:12.113 15:12:27 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:12.113 15:12:27 -- nvmf/common.sh@661 -- # echo tcp 00:25:12.113 15:12:27 -- nvmf/common.sh@662 -- # echo 4420 00:25:12.113 15:12:27 -- nvmf/common.sh@663 -- # echo ipv4 00:25:12.113 15:12:27 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:12.113 15:12:27 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.1 -t tcp -s 4420 00:25:12.113 00:25:12.113 Discovery Log Number of Records 2, Generation counter 2 00:25:12.113 =====Discovery Log Entry 0====== 00:25:12.113 trtype: tcp 00:25:12.113 adrfam: ipv4 00:25:12.113 subtype: current discovery subsystem 00:25:12.113 treq: not specified, sq flow control disable supported 00:25:12.113 portid: 1 00:25:12.113 trsvcid: 4420 00:25:12.113 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:12.113 traddr: 10.0.0.1 00:25:12.113 eflags: none 00:25:12.113 sectype: none 00:25:12.113 =====Discovery Log Entry 1====== 00:25:12.113 trtype: tcp 00:25:12.113 adrfam: ipv4 00:25:12.113 subtype: nvme subsystem 00:25:12.113 treq: not specified, sq flow control disable supported 00:25:12.113 portid: 1 00:25:12.113 trsvcid: 4420 00:25:12.113 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:12.113 traddr: 10.0.0.1 00:25:12.113 eflags: none 00:25:12.113 sectype: none 00:25:12.113 15:12:27 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:12.113 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:12.372 ===================================================== 00:25:12.372 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:12.372 ===================================================== 00:25:12.372 Controller Capabilities/Features 00:25:12.372 ================================ 00:25:12.372 Vendor ID: 0000 00:25:12.372 Subsystem Vendor ID: 0000 00:25:12.372 Serial Number: ea974500dcf9d38e2a4e 00:25:12.372 Model Number: Linux 00:25:12.372 Firmware Version: 6.7.0-68 00:25:12.372 Recommended Arb Burst: 0 00:25:12.372 IEEE OUI Identifier: 00 00 00 00:25:12.372 Multi-path I/O 00:25:12.372 May have multiple subsystem ports: No 00:25:12.372 May have multiple controllers: No 00:25:12.372 Associated with SR-IOV VF: No 00:25:12.372 Max Data Transfer Size: Unlimited 00:25:12.372 Max Number of Namespaces: 0 00:25:12.372 Max Number of I/O Queues: 1024 00:25:12.372 NVMe Specification Version (VS): 1.3 00:25:12.372 NVMe Specification Version (Identify): 1.3 00:25:12.372 Maximum Queue Entries: 1024 00:25:12.372 Contiguous Queues Required: No 00:25:12.372 Arbitration Mechanisms Supported 00:25:12.372 Weighted Round Robin: Not Supported 00:25:12.372 Vendor Specific: Not Supported 00:25:12.372 Reset Timeout: 7500 ms 00:25:12.372 Doorbell Stride: 4 bytes 00:25:12.372 NVM Subsystem Reset: Not Supported 00:25:12.372 Command Sets Supported 00:25:12.372 NVM Command Set: Supported 00:25:12.372 Boot Partition: Not Supported 00:25:12.372 Memory Page Size Minimum: 4096 bytes 00:25:12.372 Memory Page Size Maximum: 4096 bytes 00:25:12.372 Persistent Memory Region: Not Supported 00:25:12.372 Optional Asynchronous Events Supported 00:25:12.372 Namespace Attribute Notices: Not Supported 00:25:12.372 Firmware Activation Notices: Not Supported 00:25:12.372 ANA Change Notices: Not Supported 00:25:12.372 PLE Aggregate Log Change Notices: Not Supported 00:25:12.372 LBA Status Info Alert Notices: Not Supported 00:25:12.372 EGE Aggregate Log Change Notices: Not Supported 00:25:12.372 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.372 Zone Descriptor Change Notices: Not Supported 00:25:12.372 Discovery Log Change Notices: Supported 00:25:12.372 Controller Attributes 00:25:12.372 128-bit Host Identifier: Not Supported 00:25:12.372 Non-Operational Permissive Mode: Not Supported 00:25:12.372 NVM Sets: Not Supported 00:25:12.372 Read Recovery Levels: Not Supported 00:25:12.372 Endurance Groups: Not Supported 00:25:12.372 Predictable Latency Mode: Not Supported 00:25:12.372 Traffic Based Keep ALive: Not Supported 00:25:12.372 Namespace Granularity: Not Supported 00:25:12.372 SQ Associations: Not Supported 00:25:12.372 UUID List: Not Supported 00:25:12.372 Multi-Domain Subsystem: Not Supported 00:25:12.372 Fixed Capacity Management: Not Supported 
00:25:12.372 Variable Capacity Management: Not Supported 00:25:12.372 Delete Endurance Group: Not Supported 00:25:12.372 Delete NVM Set: Not Supported 00:25:12.372 Extended LBA Formats Supported: Not Supported 00:25:12.372 Flexible Data Placement Supported: Not Supported 00:25:12.372 00:25:12.372 Controller Memory Buffer Support 00:25:12.372 ================================ 00:25:12.372 Supported: No 00:25:12.372 00:25:12.372 Persistent Memory Region Support 00:25:12.372 ================================ 00:25:12.372 Supported: No 00:25:12.372 00:25:12.372 Admin Command Set Attributes 00:25:12.372 ============================ 00:25:12.372 Security Send/Receive: Not Supported 00:25:12.372 Format NVM: Not Supported 00:25:12.372 Firmware Activate/Download: Not Supported 00:25:12.372 Namespace Management: Not Supported 00:25:12.372 Device Self-Test: Not Supported 00:25:12.372 Directives: Not Supported 00:25:12.372 NVMe-MI: Not Supported 00:25:12.373 Virtualization Management: Not Supported 00:25:12.373 Doorbell Buffer Config: Not Supported 00:25:12.373 Get LBA Status Capability: Not Supported 00:25:12.373 Command & Feature Lockdown Capability: Not Supported 00:25:12.373 Abort Command Limit: 1 00:25:12.373 Async Event Request Limit: 1 00:25:12.373 Number of Firmware Slots: N/A 00:25:12.373 Firmware Slot 1 Read-Only: N/A 00:25:12.373 Firmware Activation Without Reset: N/A 00:25:12.373 Multiple Update Detection Support: N/A 00:25:12.373 Firmware Update Granularity: No Information Provided 00:25:12.373 Per-Namespace SMART Log: No 00:25:12.373 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.373 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:12.373 Command Effects Log Page: Not Supported 00:25:12.373 Get Log Page Extended Data: Supported 00:25:12.373 Telemetry Log Pages: Not Supported 00:25:12.373 Persistent Event Log Pages: Not Supported 00:25:12.373 Supported Log Pages Log Page: May Support 00:25:12.373 Commands Supported & Effects Log Page: Not Supported 00:25:12.373 Feature Identifiers & Effects Log Page:May Support 00:25:12.373 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.373 Data Area 4 for Telemetry Log: Not Supported 00:25:12.373 Error Log Page Entries Supported: 1 00:25:12.373 Keep Alive: Not Supported 00:25:12.373 00:25:12.373 NVM Command Set Attributes 00:25:12.373 ========================== 00:25:12.373 Submission Queue Entry Size 00:25:12.373 Max: 1 00:25:12.373 Min: 1 00:25:12.373 Completion Queue Entry Size 00:25:12.373 Max: 1 00:25:12.373 Min: 1 00:25:12.373 Number of Namespaces: 0 00:25:12.373 Compare Command: Not Supported 00:25:12.373 Write Uncorrectable Command: Not Supported 00:25:12.373 Dataset Management Command: Not Supported 00:25:12.373 Write Zeroes Command: Not Supported 00:25:12.373 Set Features Save Field: Not Supported 00:25:12.373 Reservations: Not Supported 00:25:12.373 Timestamp: Not Supported 00:25:12.373 Copy: Not Supported 00:25:12.373 Volatile Write Cache: Not Present 00:25:12.373 Atomic Write Unit (Normal): 1 00:25:12.373 Atomic Write Unit (PFail): 1 00:25:12.373 Atomic Compare & Write Unit: 1 00:25:12.373 Fused Compare & Write: Not Supported 00:25:12.373 Scatter-Gather List 00:25:12.373 SGL Command Set: Supported 00:25:12.373 SGL Keyed: Not Supported 00:25:12.373 SGL Bit Bucket Descriptor: Not Supported 00:25:12.373 SGL Metadata Pointer: Not Supported 00:25:12.373 Oversized SGL: Not Supported 00:25:12.373 SGL Metadata Address: Not Supported 00:25:12.373 SGL Offset: Supported 00:25:12.373 Transport SGL Data Block: Not 
Supported 00:25:12.373 Replay Protected Memory Block: Not Supported 00:25:12.373 00:25:12.373 Firmware Slot Information 00:25:12.373 ========================= 00:25:12.373 Active slot: 0 00:25:12.373 00:25:12.373 00:25:12.373 Error Log 00:25:12.373 ========= 00:25:12.373 00:25:12.373 Active Namespaces 00:25:12.373 ================= 00:25:12.373 Discovery Log Page 00:25:12.373 ================== 00:25:12.373 Generation Counter: 2 00:25:12.373 Number of Records: 2 00:25:12.373 Record Format: 0 00:25:12.373 00:25:12.373 Discovery Log Entry 0 00:25:12.373 ---------------------- 00:25:12.373 Transport Type: 3 (TCP) 00:25:12.373 Address Family: 1 (IPv4) 00:25:12.373 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:12.373 Entry Flags: 00:25:12.373 Duplicate Returned Information: 0 00:25:12.373 Explicit Persistent Connection Support for Discovery: 0 00:25:12.373 Transport Requirements: 00:25:12.373 Secure Channel: Not Specified 00:25:12.373 Port ID: 1 (0x0001) 00:25:12.373 Controller ID: 65535 (0xffff) 00:25:12.373 Admin Max SQ Size: 32 00:25:12.373 Transport Service Identifier: 4420 00:25:12.373 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:12.373 Transport Address: 10.0.0.1 00:25:12.373 Discovery Log Entry 1 00:25:12.373 ---------------------- 00:25:12.373 Transport Type: 3 (TCP) 00:25:12.373 Address Family: 1 (IPv4) 00:25:12.373 Subsystem Type: 2 (NVM Subsystem) 00:25:12.373 Entry Flags: 00:25:12.373 Duplicate Returned Information: 0 00:25:12.373 Explicit Persistent Connection Support for Discovery: 0 00:25:12.373 Transport Requirements: 00:25:12.373 Secure Channel: Not Specified 00:25:12.373 Port ID: 1 (0x0001) 00:25:12.373 Controller ID: 65535 (0xffff) 00:25:12.373 Admin Max SQ Size: 32 00:25:12.373 Transport Service Identifier: 4420 00:25:12.373 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:12.373 Transport Address: 10.0.0.1 00:25:12.373 15:12:27 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:12.632 get_feature(0x01) failed 00:25:12.632 get_feature(0x02) failed 00:25:12.632 get_feature(0x04) failed 00:25:12.632 ===================================================== 00:25:12.632 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:12.632 ===================================================== 00:25:12.632 Controller Capabilities/Features 00:25:12.632 ================================ 00:25:12.632 Vendor ID: 0000 00:25:12.632 Subsystem Vendor ID: 0000 00:25:12.632 Serial Number: 3251e091b7b7d2896d45 00:25:12.632 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:12.632 Firmware Version: 6.7.0-68 00:25:12.632 Recommended Arb Burst: 6 00:25:12.632 IEEE OUI Identifier: 00 00 00 00:25:12.632 Multi-path I/O 00:25:12.632 May have multiple subsystem ports: Yes 00:25:12.632 May have multiple controllers: Yes 00:25:12.632 Associated with SR-IOV VF: No 00:25:12.632 Max Data Transfer Size: Unlimited 00:25:12.632 Max Number of Namespaces: 1024 00:25:12.632 Max Number of I/O Queues: 128 00:25:12.632 NVMe Specification Version (VS): 1.3 00:25:12.632 NVMe Specification Version (Identify): 1.3 00:25:12.632 Maximum Queue Entries: 1024 00:25:12.632 Contiguous Queues Required: No 00:25:12.632 Arbitration Mechanisms Supported 00:25:12.632 Weighted Round Robin: Not Supported 00:25:12.632 Vendor Specific: Not Supported 00:25:12.632 Reset Timeout: 7500 ms 00:25:12.632 Doorbell Stride: 4 bytes 
00:25:12.632 NVM Subsystem Reset: Not Supported 00:25:12.632 Command Sets Supported 00:25:12.632 NVM Command Set: Supported 00:25:12.632 Boot Partition: Not Supported 00:25:12.632 Memory Page Size Minimum: 4096 bytes 00:25:12.632 Memory Page Size Maximum: 4096 bytes 00:25:12.632 Persistent Memory Region: Not Supported 00:25:12.632 Optional Asynchronous Events Supported 00:25:12.632 Namespace Attribute Notices: Supported 00:25:12.632 Firmware Activation Notices: Not Supported 00:25:12.632 ANA Change Notices: Supported 00:25:12.632 PLE Aggregate Log Change Notices: Not Supported 00:25:12.632 LBA Status Info Alert Notices: Not Supported 00:25:12.632 EGE Aggregate Log Change Notices: Not Supported 00:25:12.632 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.632 Zone Descriptor Change Notices: Not Supported 00:25:12.632 Discovery Log Change Notices: Not Supported 00:25:12.632 Controller Attributes 00:25:12.632 128-bit Host Identifier: Supported 00:25:12.632 Non-Operational Permissive Mode: Not Supported 00:25:12.632 NVM Sets: Not Supported 00:25:12.632 Read Recovery Levels: Not Supported 00:25:12.632 Endurance Groups: Not Supported 00:25:12.632 Predictable Latency Mode: Not Supported 00:25:12.632 Traffic Based Keep ALive: Supported 00:25:12.632 Namespace Granularity: Not Supported 00:25:12.632 SQ Associations: Not Supported 00:25:12.632 UUID List: Not Supported 00:25:12.632 Multi-Domain Subsystem: Not Supported 00:25:12.632 Fixed Capacity Management: Not Supported 00:25:12.632 Variable Capacity Management: Not Supported 00:25:12.632 Delete Endurance Group: Not Supported 00:25:12.632 Delete NVM Set: Not Supported 00:25:12.632 Extended LBA Formats Supported: Not Supported 00:25:12.632 Flexible Data Placement Supported: Not Supported 00:25:12.632 00:25:12.632 Controller Memory Buffer Support 00:25:12.632 ================================ 00:25:12.632 Supported: No 00:25:12.632 00:25:12.632 Persistent Memory Region Support 00:25:12.632 ================================ 00:25:12.632 Supported: No 00:25:12.632 00:25:12.632 Admin Command Set Attributes 00:25:12.632 ============================ 00:25:12.632 Security Send/Receive: Not Supported 00:25:12.633 Format NVM: Not Supported 00:25:12.633 Firmware Activate/Download: Not Supported 00:25:12.633 Namespace Management: Not Supported 00:25:12.633 Device Self-Test: Not Supported 00:25:12.633 Directives: Not Supported 00:25:12.633 NVMe-MI: Not Supported 00:25:12.633 Virtualization Management: Not Supported 00:25:12.633 Doorbell Buffer Config: Not Supported 00:25:12.633 Get LBA Status Capability: Not Supported 00:25:12.633 Command & Feature Lockdown Capability: Not Supported 00:25:12.633 Abort Command Limit: 4 00:25:12.633 Async Event Request Limit: 4 00:25:12.633 Number of Firmware Slots: N/A 00:25:12.633 Firmware Slot 1 Read-Only: N/A 00:25:12.633 Firmware Activation Without Reset: N/A 00:25:12.633 Multiple Update Detection Support: N/A 00:25:12.633 Firmware Update Granularity: No Information Provided 00:25:12.633 Per-Namespace SMART Log: Yes 00:25:12.633 Asymmetric Namespace Access Log Page: Supported 00:25:12.633 ANA Transition Time : 10 sec 00:25:12.633 00:25:12.633 Asymmetric Namespace Access Capabilities 00:25:12.633 ANA Optimized State : Supported 00:25:12.633 ANA Non-Optimized State : Supported 00:25:12.633 ANA Inaccessible State : Supported 00:25:12.633 ANA Persistent Loss State : Supported 00:25:12.633 ANA Change State : Supported 00:25:12.633 ANAGRPID is not changed : No 00:25:12.633 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:25:12.633 00:25:12.633 ANA Group Identifier Maximum : 128 00:25:12.633 Number of ANA Group Identifiers : 128 00:25:12.633 Max Number of Allowed Namespaces : 1024 00:25:12.633 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:12.633 Command Effects Log Page: Supported 00:25:12.633 Get Log Page Extended Data: Supported 00:25:12.633 Telemetry Log Pages: Not Supported 00:25:12.633 Persistent Event Log Pages: Not Supported 00:25:12.633 Supported Log Pages Log Page: May Support 00:25:12.633 Commands Supported & Effects Log Page: Not Supported 00:25:12.633 Feature Identifiers & Effects Log Page:May Support 00:25:12.633 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.633 Data Area 4 for Telemetry Log: Not Supported 00:25:12.633 Error Log Page Entries Supported: 128 00:25:12.633 Keep Alive: Supported 00:25:12.633 Keep Alive Granularity: 1000 ms 00:25:12.633 00:25:12.633 NVM Command Set Attributes 00:25:12.633 ========================== 00:25:12.633 Submission Queue Entry Size 00:25:12.633 Max: 64 00:25:12.633 Min: 64 00:25:12.633 Completion Queue Entry Size 00:25:12.633 Max: 16 00:25:12.633 Min: 16 00:25:12.633 Number of Namespaces: 1024 00:25:12.633 Compare Command: Not Supported 00:25:12.633 Write Uncorrectable Command: Not Supported 00:25:12.633 Dataset Management Command: Supported 00:25:12.633 Write Zeroes Command: Supported 00:25:12.633 Set Features Save Field: Not Supported 00:25:12.633 Reservations: Not Supported 00:25:12.633 Timestamp: Not Supported 00:25:12.633 Copy: Not Supported 00:25:12.633 Volatile Write Cache: Present 00:25:12.633 Atomic Write Unit (Normal): 1 00:25:12.633 Atomic Write Unit (PFail): 1 00:25:12.633 Atomic Compare & Write Unit: 1 00:25:12.633 Fused Compare & Write: Not Supported 00:25:12.633 Scatter-Gather List 00:25:12.633 SGL Command Set: Supported 00:25:12.633 SGL Keyed: Not Supported 00:25:12.633 SGL Bit Bucket Descriptor: Not Supported 00:25:12.633 SGL Metadata Pointer: Not Supported 00:25:12.633 Oversized SGL: Not Supported 00:25:12.633 SGL Metadata Address: Not Supported 00:25:12.633 SGL Offset: Supported 00:25:12.633 Transport SGL Data Block: Not Supported 00:25:12.633 Replay Protected Memory Block: Not Supported 00:25:12.633 00:25:12.633 Firmware Slot Information 00:25:12.633 ========================= 00:25:12.633 Active slot: 0 00:25:12.633 00:25:12.633 Asymmetric Namespace Access 00:25:12.633 =========================== 00:25:12.633 Change Count : 0 00:25:12.633 Number of ANA Group Descriptors : 1 00:25:12.633 ANA Group Descriptor : 0 00:25:12.633 ANA Group ID : 1 00:25:12.633 Number of NSID Values : 1 00:25:12.633 Change Count : 0 00:25:12.633 ANA State : 1 00:25:12.633 Namespace Identifier : 1 00:25:12.633 00:25:12.633 Commands Supported and Effects 00:25:12.633 ============================== 00:25:12.633 Admin Commands 00:25:12.633 -------------- 00:25:12.633 Get Log Page (02h): Supported 00:25:12.633 Identify (06h): Supported 00:25:12.633 Abort (08h): Supported 00:25:12.633 Set Features (09h): Supported 00:25:12.633 Get Features (0Ah): Supported 00:25:12.633 Asynchronous Event Request (0Ch): Supported 00:25:12.633 Keep Alive (18h): Supported 00:25:12.633 I/O Commands 00:25:12.633 ------------ 00:25:12.633 Flush (00h): Supported 00:25:12.633 Write (01h): Supported LBA-Change 00:25:12.633 Read (02h): Supported 00:25:12.633 Write Zeroes (08h): Supported LBA-Change 00:25:12.633 Dataset Management (09h): Supported 00:25:12.633 00:25:12.633 Error Log 00:25:12.633 ========= 00:25:12.633 Entry: 0 00:25:12.633 Error Count: 0x3 00:25:12.633 Submission 
Queue Id: 0x0 00:25:12.633 Command Id: 0x5 00:25:12.633 Phase Bit: 0 00:25:12.633 Status Code: 0x2 00:25:12.633 Status Code Type: 0x0 00:25:12.633 Do Not Retry: 1 00:25:12.633 Error Location: 0x28 00:25:12.633 LBA: 0x0 00:25:12.633 Namespace: 0x0 00:25:12.633 Vendor Log Page: 0x0 00:25:12.633 ----------- 00:25:12.633 Entry: 1 00:25:12.633 Error Count: 0x2 00:25:12.633 Submission Queue Id: 0x0 00:25:12.633 Command Id: 0x5 00:25:12.633 Phase Bit: 0 00:25:12.633 Status Code: 0x2 00:25:12.633 Status Code Type: 0x0 00:25:12.633 Do Not Retry: 1 00:25:12.633 Error Location: 0x28 00:25:12.633 LBA: 0x0 00:25:12.633 Namespace: 0x0 00:25:12.633 Vendor Log Page: 0x0 00:25:12.633 ----------- 00:25:12.633 Entry: 2 00:25:12.633 Error Count: 0x1 00:25:12.633 Submission Queue Id: 0x0 00:25:12.633 Command Id: 0x4 00:25:12.633 Phase Bit: 0 00:25:12.633 Status Code: 0x2 00:25:12.633 Status Code Type: 0x0 00:25:12.633 Do Not Retry: 1 00:25:12.633 Error Location: 0x28 00:25:12.633 LBA: 0x0 00:25:12.633 Namespace: 0x0 00:25:12.633 Vendor Log Page: 0x0 00:25:12.633 00:25:12.633 Number of Queues 00:25:12.633 ================ 00:25:12.633 Number of I/O Submission Queues: 128 00:25:12.633 Number of I/O Completion Queues: 128 00:25:12.633 00:25:12.633 ZNS Specific Controller Data 00:25:12.633 ============================ 00:25:12.633 Zone Append Size Limit: 0 00:25:12.633 00:25:12.633 00:25:12.633 Active Namespaces 00:25:12.633 ================= 00:25:12.633 get_feature(0x05) failed 00:25:12.633 Namespace ID:1 00:25:12.633 Command Set Identifier: NVM (00h) 00:25:12.633 Deallocate: Supported 00:25:12.633 Deallocated/Unwritten Error: Not Supported 00:25:12.633 Deallocated Read Value: Unknown 00:25:12.633 Deallocate in Write Zeroes: Not Supported 00:25:12.633 Deallocated Guard Field: 0xFFFF 00:25:12.633 Flush: Supported 00:25:12.633 Reservation: Not Supported 00:25:12.633 Namespace Sharing Capabilities: Multiple Controllers 00:25:12.633 Size (in LBAs): 1310720 (5GiB) 00:25:12.633 Capacity (in LBAs): 1310720 (5GiB) 00:25:12.633 Utilization (in LBAs): 1310720 (5GiB) 00:25:12.633 UUID: f19d40eb-c2a3-4a84-a7b2-caf68d2a8b7a 00:25:12.633 Thin Provisioning: Not Supported 00:25:12.633 Per-NS Atomic Units: Yes 00:25:12.633 Atomic Boundary Size (Normal): 0 00:25:12.633 Atomic Boundary Size (PFail): 0 00:25:12.633 Atomic Boundary Offset: 0 00:25:12.633 NGUID/EUI64 Never Reused: No 00:25:12.633 ANA group ID: 1 00:25:12.633 Namespace Write Protected: No 00:25:12.633 Number of LBA Formats: 1 00:25:12.633 Current LBA Format: LBA Format #00 00:25:12.633 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:12.633 00:25:12.633 15:12:28 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:12.633 15:12:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:12.633 15:12:28 -- nvmf/common.sh@117 -- # sync 00:25:12.633 15:12:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.633 15:12:28 -- nvmf/common.sh@120 -- # set +e 00:25:12.633 15:12:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.633 15:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.633 rmmod nvme_tcp 00:25:12.633 rmmod nvme_fabrics 00:25:12.633 15:12:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.633 15:12:28 -- nvmf/common.sh@124 -- # set -e 00:25:12.634 15:12:28 -- nvmf/common.sh@125 -- # return 0 00:25:12.634 15:12:28 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:12.634 15:12:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:12.634 15:12:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:12.634 15:12:28 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:12.634 15:12:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.634 15:12:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.634 15:12:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.634 15:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.634 15:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.634 15:12:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:12.634 15:12:28 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:12.634 15:12:28 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:12.634 15:12:28 -- nvmf/common.sh@675 -- # echo 0 00:25:12.634 15:12:28 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.634 15:12:28 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:12.634 15:12:28 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:12.634 15:12:28 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.634 15:12:28 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:12.634 15:12:28 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:12.634 15:12:28 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:13.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:13.827 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:13.827 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:13.827 ************************************ 00:25:13.827 END TEST nvmf_identify_kernel_target 00:25:13.827 ************************************ 00:25:13.827 00:25:13.827 real 0m3.633s 00:25:13.827 user 0m1.176s 00:25:13.827 sys 0m1.983s 00:25:13.827 15:12:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:13.827 15:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 15:12:29 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:13.827 15:12:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:13.827 15:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:13.827 15:12:29 -- common/autotest_common.sh@10 -- # set +x 00:25:14.086 ************************************ 00:25:14.086 START TEST nvmf_auth 00:25:14.086 ************************************ 00:25:14.086 15:12:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:14.086 * Looking for test storage... 
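The identify_kernel_target run above stands up a kernel nvmet target purely through configfs and tears it down the same way. Collapsed into plain commands it looks roughly like the sketch below; the redirect targets of the echo commands are not visible in the xtrace, so the attribute file names follow the standard nvmet configfs layout and should be treated as assumptions.

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # assumed attribute
echo 1 > "$subsys/attr_allow_any_host"                          # assumed attribute
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"          # backing device from the trace
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Teardown, mirroring clean_kernel_target above:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

With the port linked, the nvme discover call in the trace only needs the generated hostnqn/hostid pair plus -a 10.0.0.1 -t tcp -s 4420 to see both the discovery subsystem and nqn.2016-06.io.spdk:testnqn.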
00:25:14.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:14.086 15:12:29 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:14.086 15:12:29 -- nvmf/common.sh@7 -- # uname -s 00:25:14.086 15:12:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.087 15:12:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.087 15:12:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.087 15:12:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.087 15:12:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.087 15:12:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.087 15:12:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.087 15:12:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.087 15:12:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.087 15:12:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:14.087 15:12:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:14.087 15:12:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.087 15:12:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.087 15:12:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:14.087 15:12:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.087 15:12:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:14.087 15:12:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.087 15:12:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.087 15:12:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.087 15:12:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.087 15:12:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.087 15:12:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.087 15:12:29 -- paths/export.sh@5 -- # export PATH 00:25:14.087 15:12:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.087 15:12:29 -- nvmf/common.sh@47 -- # : 0 00:25:14.087 15:12:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.087 15:12:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.087 15:12:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.087 15:12:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.087 15:12:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.087 15:12:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.087 15:12:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.087 15:12:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.087 15:12:29 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:14.087 15:12:29 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:14.087 15:12:29 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:14.087 15:12:29 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:14.087 15:12:29 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.087 15:12:29 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:14.087 15:12:29 -- host/auth.sh@21 -- # keys=() 00:25:14.087 15:12:29 -- host/auth.sh@77 -- # nvmftestinit 00:25:14.087 15:12:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:14.087 15:12:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.087 15:12:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:14.087 15:12:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:14.087 15:12:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:14.087 15:12:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.087 15:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.087 15:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.087 15:12:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:14.087 15:12:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:14.087 15:12:29 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.087 15:12:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.087 15:12:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:14.087 15:12:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:14.087 15:12:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:14.087 15:12:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:14.087 15:12:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:14.087 15:12:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.087 15:12:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:14.087 15:12:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:14.087 15:12:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:14.087 15:12:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:14.087 15:12:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:14.087 15:12:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:14.347 Cannot find device "nvmf_tgt_br" 00:25:14.347 15:12:29 -- nvmf/common.sh@155 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:14.347 Cannot find device "nvmf_tgt_br2" 00:25:14.347 15:12:29 -- nvmf/common.sh@156 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:14.347 15:12:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:14.347 Cannot find device "nvmf_tgt_br" 00:25:14.347 15:12:29 -- nvmf/common.sh@158 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:14.347 Cannot find device "nvmf_tgt_br2" 00:25:14.347 15:12:29 -- nvmf/common.sh@159 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:14.347 15:12:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:14.347 15:12:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:14.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.347 15:12:29 -- nvmf/common.sh@162 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:14.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:14.347 15:12:29 -- nvmf/common.sh@163 -- # true 00:25:14.347 15:12:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:14.348 15:12:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:14.348 15:12:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:14.348 15:12:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:14.348 15:12:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:14.348 15:12:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:14.348 15:12:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:14.348 15:12:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:14.348 15:12:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:14.348 15:12:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:14.348 15:12:30 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:14.348 15:12:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:14.348 15:12:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:14.348 15:12:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:14.607 15:12:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:14.607 15:12:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:14.607 15:12:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:14.607 15:12:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:14.607 15:12:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:14.607 15:12:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:14.607 15:12:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:14.607 15:12:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:14.607 15:12:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:14.607 15:12:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:14.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:25:14.607 00:25:14.607 --- 10.0.0.2 ping statistics --- 00:25:14.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.607 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:14.607 15:12:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:14.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:14.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:25:14.607 00:25:14.607 --- 10.0.0.3 ping statistics --- 00:25:14.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.607 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:14.607 15:12:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:14.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:25:14.607 00:25:14.607 --- 10.0.0.1 ping statistics --- 00:25:14.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.607 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:14.607 15:12:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.607 15:12:30 -- nvmf/common.sh@422 -- # return 0 00:25:14.607 15:12:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:14.607 15:12:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.607 15:12:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:14.607 15:12:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:14.607 15:12:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.607 15:12:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:14.607 15:12:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:14.607 15:12:30 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:25:14.607 15:12:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:14.607 15:12:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:14.607 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:14.607 15:12:30 -- nvmf/common.sh@470 -- # nvmfpid=83575 00:25:14.607 15:12:30 -- nvmf/common.sh@471 -- # waitforlisten 83575 00:25:14.607 15:12:30 -- common/autotest_common.sh@817 -- # '[' -z 83575 ']' 00:25:14.607 15:12:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.607 15:12:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:14.607 15:12:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:14.607 15:12:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:14.607 15:12:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:14.607 15:12:30 -- common/autotest_common.sh@10 -- # set +x 00:25:15.547 15:12:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:15.547 15:12:31 -- common/autotest_common.sh@850 -- # return 0 00:25:15.547 15:12:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:15.547 15:12:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:15.547 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.547 15:12:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.547 15:12:31 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:15.547 15:12:31 -- host/auth.sh@81 -- # gen_key null 32 00:25:15.547 15:12:31 -- host/auth.sh@53 -- # local digest len file key 00:25:15.547 15:12:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:15.547 15:12:31 -- host/auth.sh@54 -- # local -A digests 00:25:15.547 15:12:31 -- host/auth.sh@56 -- # digest=null 00:25:15.547 15:12:31 -- host/auth.sh@56 -- # len=32 00:25:15.547 15:12:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:15.547 15:12:31 -- host/auth.sh@57 -- # key=74f8889ffe9729446480ea8ef9807e8a 00:25:15.547 15:12:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:15.547 15:12:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.JTH 00:25:15.547 15:12:31 -- host/auth.sh@59 -- # format_dhchap_key 74f8889ffe9729446480ea8ef9807e8a 0 00:25:15.547 15:12:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 74f8889ffe9729446480ea8ef9807e8a 0 00:25:15.547 15:12:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:15.547 15:12:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:15.547 15:12:31 -- nvmf/common.sh@693 -- # key=74f8889ffe9729446480ea8ef9807e8a 00:25:15.547 15:12:31 -- nvmf/common.sh@693 -- # digest=0 00:25:15.547 15:12:31 -- nvmf/common.sh@694 -- # python - 00:25:15.806 15:12:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.JTH 00:25:15.806 15:12:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.JTH 00:25:15.806 15:12:31 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.JTH 00:25:15.806 15:12:31 -- host/auth.sh@82 -- # gen_key null 48 00:25:15.806 15:12:31 -- host/auth.sh@53 -- # local digest len file key 00:25:15.806 15:12:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:15.806 15:12:31 -- host/auth.sh@54 -- # local -A digests 00:25:15.806 15:12:31 -- host/auth.sh@56 -- # digest=null 00:25:15.806 15:12:31 -- host/auth.sh@56 -- # len=48 00:25:15.806 15:12:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:15.806 15:12:31 -- host/auth.sh@57 -- # key=4d6a39e70d2dae534545aef5297eaa77ce0a5ca9e399275f 00:25:15.806 15:12:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:15.806 15:12:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.iGK 00:25:15.806 15:12:31 -- host/auth.sh@59 -- # format_dhchap_key 4d6a39e70d2dae534545aef5297eaa77ce0a5ca9e399275f 0 00:25:15.806 15:12:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 4d6a39e70d2dae534545aef5297eaa77ce0a5ca9e399275f 0 00:25:15.806 15:12:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # key=4d6a39e70d2dae534545aef5297eaa77ce0a5ca9e399275f 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # digest=0 00:25:15.806 
15:12:31 -- nvmf/common.sh@694 -- # python - 00:25:15.806 15:12:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.iGK 00:25:15.806 15:12:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.iGK 00:25:15.806 15:12:31 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.iGK 00:25:15.806 15:12:31 -- host/auth.sh@83 -- # gen_key sha256 32 00:25:15.806 15:12:31 -- host/auth.sh@53 -- # local digest len file key 00:25:15.806 15:12:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:15.806 15:12:31 -- host/auth.sh@54 -- # local -A digests 00:25:15.806 15:12:31 -- host/auth.sh@56 -- # digest=sha256 00:25:15.806 15:12:31 -- host/auth.sh@56 -- # len=32 00:25:15.806 15:12:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:15.806 15:12:31 -- host/auth.sh@57 -- # key=ce4798e77d8141a4c3f32c57b6d559ba 00:25:15.806 15:12:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:25:15.806 15:12:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.ktP 00:25:15.806 15:12:31 -- host/auth.sh@59 -- # format_dhchap_key ce4798e77d8141a4c3f32c57b6d559ba 1 00:25:15.806 15:12:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 ce4798e77d8141a4c3f32c57b6d559ba 1 00:25:15.806 15:12:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # key=ce4798e77d8141a4c3f32c57b6d559ba 00:25:15.806 15:12:31 -- nvmf/common.sh@693 -- # digest=1 00:25:15.806 15:12:31 -- nvmf/common.sh@694 -- # python - 00:25:15.806 15:12:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.ktP 00:25:15.806 15:12:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.ktP 00:25:15.806 15:12:31 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.ktP 00:25:15.806 15:12:31 -- host/auth.sh@84 -- # gen_key sha384 48 00:25:15.806 15:12:31 -- host/auth.sh@53 -- # local digest len file key 00:25:15.807 15:12:31 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:15.807 15:12:31 -- host/auth.sh@54 -- # local -A digests 00:25:15.807 15:12:31 -- host/auth.sh@56 -- # digest=sha384 00:25:15.807 15:12:31 -- host/auth.sh@56 -- # len=48 00:25:15.807 15:12:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:15.807 15:12:31 -- host/auth.sh@57 -- # key=7ed981cd11e8458281f411fbf5dd51a4cb5f1e4e12edadcd 00:25:15.807 15:12:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:25:15.807 15:12:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.uaX 00:25:15.807 15:12:31 -- host/auth.sh@59 -- # format_dhchap_key 7ed981cd11e8458281f411fbf5dd51a4cb5f1e4e12edadcd 2 00:25:15.807 15:12:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 7ed981cd11e8458281f411fbf5dd51a4cb5f1e4e12edadcd 2 00:25:15.807 15:12:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:15.807 15:12:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:15.807 15:12:31 -- nvmf/common.sh@693 -- # key=7ed981cd11e8458281f411fbf5dd51a4cb5f1e4e12edadcd 00:25:15.807 15:12:31 -- nvmf/common.sh@693 -- # digest=2 00:25:15.807 15:12:31 -- nvmf/common.sh@694 -- # python - 00:25:15.807 15:12:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.uaX 00:25:15.807 15:12:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.uaX 00:25:15.807 15:12:31 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.uaX 00:25:15.807 15:12:31 -- host/auth.sh@85 -- # gen_key sha512 64 00:25:15.807 15:12:31 -- host/auth.sh@53 -- # local digest len file key 00:25:15.807 15:12:31 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:15.807 15:12:31 -- host/auth.sh@54 -- # local -A digests 00:25:15.807 15:12:31 -- host/auth.sh@56 -- # digest=sha512 00:25:15.807 15:12:31 -- host/auth.sh@56 -- # len=64 00:25:15.807 15:12:31 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:15.807 15:12:31 -- host/auth.sh@57 -- # key=4d0d10966251a6c2f9ff8f2148488763350eb42ae8a8cbdd1cdf351a81c4a0f8 00:25:15.807 15:12:31 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.066 15:12:31 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.Hqn 00:25:16.066 15:12:31 -- host/auth.sh@59 -- # format_dhchap_key 4d0d10966251a6c2f9ff8f2148488763350eb42ae8a8cbdd1cdf351a81c4a0f8 3 00:25:16.066 15:12:31 -- nvmf/common.sh@708 -- # format_key DHHC-1 4d0d10966251a6c2f9ff8f2148488763350eb42ae8a8cbdd1cdf351a81c4a0f8 3 00:25:16.066 15:12:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:16.066 15:12:31 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:16.066 15:12:31 -- nvmf/common.sh@693 -- # key=4d0d10966251a6c2f9ff8f2148488763350eb42ae8a8cbdd1cdf351a81c4a0f8 00:25:16.066 15:12:31 -- nvmf/common.sh@693 -- # digest=3 00:25:16.066 15:12:31 -- nvmf/common.sh@694 -- # python - 00:25:16.066 15:12:31 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.Hqn 00:25:16.066 15:12:31 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.Hqn 00:25:16.066 15:12:31 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.Hqn 00:25:16.066 15:12:31 -- host/auth.sh@87 -- # waitforlisten 83575 00:25:16.066 15:12:31 -- common/autotest_common.sh@817 -- # '[' -z 83575 ']' 00:25:16.066 15:12:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.066 15:12:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.066 15:12:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
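Note: the four gen_key calls above all follow the same pattern; a condensed sketch is below. format_dhchap_key is provided by test/nvmf/common.sh in the repo (its python body is not reproduced here), and the redirection of its output into the key file is an assumption, since xtrace hides redirections.

    # Condensed sketch of the gen_key pattern traced above (null/sha256/sha384/sha512).
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    gen_key_sketch() {
        local digest=$1 len=$2                                    # e.g. "sha256" 32
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)            # len hex characters of random material
        file=$(mktemp -t "spdk.key-$digest.XXX")                  # e.g. /tmp/spdk.key-sha256.ktP
        format_dhchap_key "$key" "${digests[$digest]}" > "$file"  # DHHC-1:0N:... secret; redirection assumed
        chmod 0600 "$file"                                        # keys must not be world-readable
        echo "$file"
    }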
00:25:16.066 15:12:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.066 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.325 15:12:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.325 15:12:31 -- common/autotest_common.sh@850 -- # return 0 00:25:16.325 15:12:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:16.325 15:12:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.JTH 00:25:16.325 15:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.326 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.326 15:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.326 15:12:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:16.326 15:12:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.iGK 00:25:16.326 15:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.326 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.326 15:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.326 15:12:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:16.326 15:12:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ktP 00:25:16.326 15:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.326 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.326 15:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.326 15:12:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:16.326 15:12:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.uaX 00:25:16.326 15:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.326 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.326 15:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.326 15:12:31 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:16.326 15:12:31 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Hqn 00:25:16.326 15:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:16.326 15:12:31 -- common/autotest_common.sh@10 -- # set +x 00:25:16.326 15:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:16.326 15:12:31 -- host/auth.sh@92 -- # nvmet_auth_init 00:25:16.326 15:12:31 -- host/auth.sh@35 -- # get_main_ns_ip 00:25:16.326 15:12:31 -- nvmf/common.sh@717 -- # local ip 00:25:16.326 15:12:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:16.326 15:12:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:16.326 15:12:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.326 15:12:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.326 15:12:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:16.326 15:12:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.326 15:12:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:16.326 15:12:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:16.326 15:12:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:16.326 15:12:31 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:16.326 15:12:31 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:16.326 15:12:31 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:16.326 15:12:31 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:16.326 15:12:31 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:16.326 15:12:31 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:16.326 15:12:31 -- nvmf/common.sh@628 -- # local block nvme 00:25:16.326 15:12:31 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:16.326 15:12:31 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:16.326 15:12:31 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.326 15:12:31 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:16.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:16.894 Waiting for block devices as requested 00:25:16.894 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:17.153 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:18.090 15:12:33 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:18.090 15:12:33 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:18.090 15:12:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:18.090 15:12:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:18.090 15:12:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:18.090 15:12:33 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:18.090 15:12:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:18.090 No valid GPT data, bailing 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # pt= 00:25:18.090 15:12:33 -- scripts/common.sh@392 -- # return 1 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:18.090 15:12:33 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:18.090 15:12:33 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:25:18.090 15:12:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:18.090 15:12:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:18.090 15:12:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:25:18.090 15:12:33 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:18.090 15:12:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:18.090 No valid GPT data, bailing 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # pt= 00:25:18.090 15:12:33 -- scripts/common.sh@392 -- # return 1 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:25:18.090 15:12:33 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:18.090 15:12:33 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:25:18.090 15:12:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:18.090 15:12:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:18.090 15:12:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:25:18.090 15:12:33 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:18.090 15:12:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:18.090 No valid GPT data, bailing 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # pt= 00:25:18.090 15:12:33 -- scripts/common.sh@392 -- # return 1 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:25:18.090 15:12:33 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:18.090 15:12:33 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:25:18.090 15:12:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:18.090 15:12:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:18.090 15:12:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:18.090 15:12:33 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:25:18.090 15:12:33 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:18.090 15:12:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:18.090 No valid GPT data, bailing 00:25:18.090 15:12:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:18.349 15:12:33 -- scripts/common.sh@391 -- # pt= 00:25:18.349 15:12:33 -- scripts/common.sh@392 -- # return 1 00:25:18.349 15:12:33 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:25:18.349 15:12:33 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:25:18.349 15:12:33 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:18.349 15:12:33 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:18.349 15:12:33 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:18.349 15:12:33 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:18.349 15:12:33 -- nvmf/common.sh@656 -- # echo 1 00:25:18.349 15:12:33 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:25:18.349 15:12:33 -- nvmf/common.sh@658 -- # echo 1 00:25:18.349 15:12:33 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:18.349 15:12:33 -- nvmf/common.sh@661 -- # echo tcp 00:25:18.350 15:12:33 -- nvmf/common.sh@662 -- # echo 4420 00:25:18.350 15:12:33 -- nvmf/common.sh@663 -- # echo ipv4 00:25:18.350 15:12:33 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:18.350 15:12:33 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.1 -t tcp -s 4420 00:25:18.350 00:25:18.350 Discovery Log Number of Records 2, Generation counter 2 00:25:18.350 =====Discovery Log Entry 0====== 00:25:18.350 trtype: tcp 00:25:18.350 adrfam: ipv4 00:25:18.350 subtype: current discovery subsystem 00:25:18.350 treq: not specified, sq flow control disable supported 00:25:18.350 portid: 1 00:25:18.350 trsvcid: 4420 00:25:18.350 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:18.350 traddr: 10.0.0.1 00:25:18.350 eflags: none 00:25:18.350 sectype: none 00:25:18.350 =====Discovery Log Entry 1====== 00:25:18.350 trtype: tcp 00:25:18.350 adrfam: ipv4 00:25:18.350 subtype: nvme subsystem 00:25:18.350 treq: not specified, sq flow control disable supported 
00:25:18.350 portid: 1 00:25:18.350 trsvcid: 4420 00:25:18.350 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:18.350 traddr: 10.0.0.1 00:25:18.350 eflags: none 00:25:18.350 sectype: none 00:25:18.350 15:12:33 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:18.350 15:12:33 -- host/auth.sh@37 -- # echo 0 00:25:18.350 15:12:33 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:18.350 15:12:33 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.350 15:12:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.350 15:12:33 -- host/auth.sh@44 -- # digest=sha256 00:25:18.350 15:12:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.350 15:12:33 -- host/auth.sh@44 -- # keyid=1 00:25:18.350 15:12:33 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:18.350 15:12:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.350 15:12:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:18.350 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:18.350 15:12:34 -- host/auth.sh@100 -- # IFS=, 00:25:18.350 15:12:34 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:25:18.350 15:12:34 -- host/auth.sh@100 -- # IFS=, 00:25:18.350 15:12:34 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.350 15:12:34 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:18.350 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.350 15:12:34 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:25:18.350 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.350 15:12:34 -- host/auth.sh@68 -- # keyid=1 00:25:18.350 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:18.350 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.350 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.350 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.350 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:18.350 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:18.350 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.350 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.350 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.350 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.350 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.350 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.350 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.350 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.350 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.350 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:18.350 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.350 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.609 
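Note: the target-side auth setup in the trace (mkdir of the host NQN, `echo 0`, the allowed_hosts symlink, and nvmet_auth_set_key's three echoes) amounts to the sketch below. The redirection targets are not visible in the xtrace; attr_allow_any_host, dhchap_hash, dhchap_dhgroup and dhchap_key are assumptions based on the usual Linux nvmet configfs layout.

    # Hedged sketch of the target-side DH-HMAC-CHAP setup traced above (run as root).
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host"                                             # register the host NQN with nvmet
    echo 0 > "$subsys/attr_allow_any_host"                    # assumed: only allow-listed hosts may connect
    ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    echo 'hmac(sha256)' > "$host/dhchap_hash"                 # assumed attribute names below
    echo 'ffdhe2048'    > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"                          # $key holds the DHHC-1:00:... secret from gen_key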
nvme0n1 00:25:18.609 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.609 15:12:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.609 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.609 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.609 15:12:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:18.609 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.609 15:12:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.609 15:12:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.609 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.609 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.609 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.609 15:12:34 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:18.609 15:12:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.609 15:12:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:18.609 15:12:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:18.609 15:12:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.609 15:12:34 -- host/auth.sh@44 -- # digest=sha256 00:25:18.609 15:12:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.609 15:12:34 -- host/auth.sh@44 -- # keyid=0 00:25:18.609 15:12:34 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:18.609 15:12:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.609 15:12:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:18.609 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:18.609 15:12:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:25:18.609 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.609 15:12:34 -- host/auth.sh@68 -- # digest=sha256 00:25:18.609 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:18.609 15:12:34 -- host/auth.sh@68 -- # keyid=0 00:25:18.609 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.609 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.609 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.609 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.609 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:18.609 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:18.609 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.609 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.609 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.609 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.609 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.609 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.609 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.609 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.609 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.609 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:18.609 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.609 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 nvme0n1 
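Note: on the initiator side, the rpc_cmd calls above are thin wrappers around scripts/rpc.py. One authenticated attach/detach cycle, with method names, flags, key names and addresses taken directly from the trace, looks like this (a sketch, not the test script itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key1 /tmp/spdk.key-null.iGK          # register the DHHC-1 secret with the keyring
    $rpc bdev_nvme_set_options --dhchap-digests sha256 \
                               --dhchap-dhgroups ffdhe2048          # restrict digest and DH group for this pass
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
         -a 10.0.0.1 -s 4420 \
         -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
         --dhchap-key key1                                           # authenticate the connection with key1
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'                # expect "nvme0" on success
    $rpc bdev_nvme_detach_controller nvme0                           # tear down before the next keyid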
00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:18.870 15:12:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:18.870 15:12:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # digest=sha256 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # keyid=1 00:25:18.870 15:12:34 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:18.870 15:12:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.870 15:12:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:18.870 15:12:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:25:18.870 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # digest=sha256 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # keyid=1 00:25:18.870 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:18.870 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:18.870 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:18.870 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:18.870 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.870 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.870 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:18.870 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.870 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:18.870 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:18.870 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:18.870 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 nvme0n1 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:18.870 15:12:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:18.870 15:12:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:18.870 15:12:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # digest=sha256 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@44 -- # keyid=2 00:25:18.870 15:12:34 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:18.870 15:12:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:18.870 15:12:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:18.870 15:12:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:25:18.870 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # digest=sha256 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:18.870 15:12:34 -- host/auth.sh@68 -- # keyid=2 00:25:18.870 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:18.870 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:18.870 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:18.870 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.130 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:19.130 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.130 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.130 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.130 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:19.130 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.130 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.130 nvme0n1 00:25:19.130 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.130 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.130 15:12:34 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:25:19.130 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.130 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.130 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.130 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.130 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.130 15:12:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:19.130 15:12:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.130 15:12:34 -- host/auth.sh@44 -- # digest=sha256 00:25:19.130 15:12:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.130 15:12:34 -- host/auth.sh@44 -- # keyid=3 00:25:19.130 15:12:34 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:19.130 15:12:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:19.130 15:12:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.130 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:19.130 15:12:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:25:19.130 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.130 15:12:34 -- host/auth.sh@68 -- # digest=sha256 00:25:19.130 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.130 15:12:34 -- host/auth.sh@68 -- # keyid=3 00:25:19.130 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.130 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.130 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.130 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.130 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.130 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:19.130 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.130 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.130 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.130 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.130 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.130 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:19.130 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.130 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 nvme0n1 00:25:19.389 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 15:12:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.389 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 15:12:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.389 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 15:12:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 15:12:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.389 15:12:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.389 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 15:12:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.389 15:12:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:19.389 15:12:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.389 15:12:34 -- host/auth.sh@44 -- # digest=sha256 00:25:19.389 15:12:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.389 15:12:34 -- host/auth.sh@44 -- # keyid=4 00:25:19.389 15:12:34 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:19.389 15:12:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:19.389 15:12:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:19.389 15:12:34 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:19.389 15:12:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:25:19.389 15:12:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.389 15:12:34 -- host/auth.sh@68 -- # digest=sha256 00:25:19.389 15:12:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:19.389 15:12:34 -- host/auth.sh@68 -- # keyid=4 00:25:19.389 15:12:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.389 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 15:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 15:12:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.389 15:12:34 -- nvmf/common.sh@717 -- # local ip 00:25:19.389 15:12:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.389 15:12:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.389 15:12:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.389 15:12:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.389 15:12:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.389 15:12:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.389 15:12:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.389 15:12:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.389 15:12:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.389 15:12:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.389 15:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 15:12:34 -- common/autotest_common.sh@10 -- # set +x 00:25:19.389 nvme0n1 00:25:19.389 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.389 15:12:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.389 15:12:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.389 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.389 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.648 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.648 15:12:35 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.648 15:12:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.648 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.648 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.649 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.649 15:12:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.649 15:12:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.649 15:12:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:19.649 15:12:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.649 15:12:35 -- host/auth.sh@44 -- # digest=sha256 00:25:19.649 15:12:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.649 15:12:35 -- host/auth.sh@44 -- # keyid=0 00:25:19.649 15:12:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:19.649 15:12:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:19.649 15:12:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:19.908 15:12:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:19.908 15:12:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:25:19.908 15:12:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # digest=sha256 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # keyid=0 00:25:19.908 15:12:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:19.908 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.908 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.908 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.908 15:12:35 -- nvmf/common.sh@717 -- # local ip 00:25:19.908 15:12:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.908 15:12:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.908 15:12:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.908 15:12:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:19.908 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.908 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.908 nvme0n1 00:25:19.908 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.908 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.908 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.908 15:12:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:19.908 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.908 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.908 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.908 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:19.908 15:12:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:19.908 15:12:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:19.908 15:12:35 -- host/auth.sh@44 -- # digest=sha256 00:25:19.908 15:12:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:19.908 15:12:35 -- host/auth.sh@44 -- # keyid=1 00:25:19.908 15:12:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:19.908 15:12:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:19.908 15:12:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:19.908 15:12:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:19.908 15:12:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:25:19.908 15:12:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # digest=sha256 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:19.908 15:12:35 -- host/auth.sh@68 -- # keyid=1 00:25:19.908 15:12:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:19.908 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.908 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:19.908 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.908 15:12:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:19.908 15:12:35 -- nvmf/common.sh@717 -- # local ip 00:25:19.908 15:12:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.908 15:12:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.908 15:12:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.908 15:12:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.908 15:12:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.908 15:12:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:19.909 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.909 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.168 nvme0n1 00:25:20.168 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.168 15:12:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.168 15:12:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.168 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.168 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.168 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.168 15:12:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.168 15:12:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.168 15:12:35 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:20.168 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.168 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.168 15:12:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.168 15:12:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:20.168 15:12:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.168 15:12:35 -- host/auth.sh@44 -- # digest=sha256 00:25:20.168 15:12:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.168 15:12:35 -- host/auth.sh@44 -- # keyid=2 00:25:20.168 15:12:35 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:20.168 15:12:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:20.168 15:12:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.168 15:12:35 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:20.168 15:12:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:25:20.168 15:12:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.168 15:12:35 -- host/auth.sh@68 -- # digest=sha256 00:25:20.168 15:12:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.168 15:12:35 -- host/auth.sh@68 -- # keyid=2 00:25:20.168 15:12:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.168 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.168 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.168 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.168 15:12:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.168 15:12:35 -- nvmf/common.sh@717 -- # local ip 00:25:20.168 15:12:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.168 15:12:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.168 15:12:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.168 15:12:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.168 15:12:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.168 15:12:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.168 15:12:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.168 15:12:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.168 15:12:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.168 15:12:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:20.168 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.168 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 nvme0n1 00:25:20.474 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 15:12:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.474 15:12:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.474 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 15:12:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.474 15:12:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.474 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 
15:12:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.474 15:12:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:20.474 15:12:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.474 15:12:35 -- host/auth.sh@44 -- # digest=sha256 00:25:20.474 15:12:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.474 15:12:35 -- host/auth.sh@44 -- # keyid=3 00:25:20.474 15:12:35 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:20.474 15:12:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:20.474 15:12:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.474 15:12:35 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:20.474 15:12:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:25:20.474 15:12:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.474 15:12:35 -- host/auth.sh@68 -- # digest=sha256 00:25:20.474 15:12:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.474 15:12:35 -- host/auth.sh@68 -- # keyid=3 00:25:20.474 15:12:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.474 15:12:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:35 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 15:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 15:12:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.474 15:12:35 -- nvmf/common.sh@717 -- # local ip 00:25:20.474 15:12:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.474 15:12:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.474 15:12:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.474 15:12:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.474 15:12:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.474 15:12:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.474 15:12:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.474 15:12:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.474 15:12:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.474 15:12:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:20.474 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 nvme0n1 00:25:20.474 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 15:12:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.474 15:12:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.474 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.474 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.474 15:12:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.474 15:12:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.474 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.474 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.746 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.746 15:12:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.746 15:12:36 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:25:20.747 15:12:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # digest=sha256 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # keyid=4 00:25:20.747 15:12:36 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:20.747 15:12:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:20.747 15:12:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:20.747 15:12:36 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:20.747 15:12:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:25:20.747 15:12:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:20.747 15:12:36 -- host/auth.sh@68 -- # digest=sha256 00:25:20.747 15:12:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:20.747 15:12:36 -- host/auth.sh@68 -- # keyid=4 00:25:20.747 15:12:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:20.747 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.747 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.747 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.747 15:12:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:20.747 15:12:36 -- nvmf/common.sh@717 -- # local ip 00:25:20.747 15:12:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:20.747 15:12:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:20.747 15:12:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.747 15:12:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.747 15:12:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:20.747 15:12:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.747 15:12:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:20.747 15:12:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:20.747 15:12:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:20.747 15:12:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.747 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.747 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.747 nvme0n1 00:25:20.747 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.747 15:12:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.747 15:12:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:20.747 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.747 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.747 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.747 15:12:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.747 15:12:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.747 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.747 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:20.747 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.747 15:12:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.747 15:12:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:20.747 15:12:36 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:25:20.747 15:12:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # digest=sha256 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:20.747 15:12:36 -- host/auth.sh@44 -- # keyid=0 00:25:20.747 15:12:36 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:20.747 15:12:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:20.747 15:12:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.314 15:12:36 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:21.314 15:12:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:25:21.315 15:12:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.315 15:12:36 -- host/auth.sh@68 -- # digest=sha256 00:25:21.315 15:12:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.315 15:12:36 -- host/auth.sh@68 -- # keyid=0 00:25:21.315 15:12:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.315 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.315 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:21.315 15:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.315 15:12:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.315 15:12:36 -- nvmf/common.sh@717 -- # local ip 00:25:21.315 15:12:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.315 15:12:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.315 15:12:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.315 15:12:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.315 15:12:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.315 15:12:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.315 15:12:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.315 15:12:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.315 15:12:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.315 15:12:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:21.315 15:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.315 15:12:36 -- common/autotest_common.sh@10 -- # set +x 00:25:21.578 nvme0n1 00:25:21.578 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.578 15:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.578 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.578 15:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.578 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.578 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.578 15:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.578 15:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.578 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.578 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.578 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.578 15:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.578 15:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:21.578 15:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.578 15:12:37 -- host/auth.sh@44 -- # 
digest=sha256 00:25:21.578 15:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.578 15:12:37 -- host/auth.sh@44 -- # keyid=1 00:25:21.578 15:12:37 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:21.578 15:12:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.578 15:12:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.578 15:12:37 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:21.578 15:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:25:21.578 15:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.578 15:12:37 -- host/auth.sh@68 -- # digest=sha256 00:25:21.578 15:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.578 15:12:37 -- host/auth.sh@68 -- # keyid=1 00:25:21.578 15:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.578 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.578 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.578 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.579 15:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.579 15:12:37 -- nvmf/common.sh@717 -- # local ip 00:25:21.579 15:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.579 15:12:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.579 15:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.579 15:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.579 15:12:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.579 15:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.579 15:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.579 15:12:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.579 15:12:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.579 15:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:21.579 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.579 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.837 nvme0n1 00:25:21.837 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.837 15:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.837 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.837 15:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.837 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.837 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.837 15:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.837 15:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.837 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.837 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.837 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.837 15:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.837 15:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:21.837 15:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.837 15:12:37 -- host/auth.sh@44 -- # digest=sha256 00:25:21.837 15:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:21.837 15:12:37 -- host/auth.sh@44 
-- # keyid=2 00:25:21.837 15:12:37 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:21.837 15:12:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.837 15:12:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:21.837 15:12:37 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:21.837 15:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:25:21.837 15:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.837 15:12:37 -- host/auth.sh@68 -- # digest=sha256 00:25:21.837 15:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:21.837 15:12:37 -- host/auth.sh@68 -- # keyid=2 00:25:21.837 15:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:21.837 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.837 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.837 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.837 15:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.837 15:12:37 -- nvmf/common.sh@717 -- # local ip 00:25:21.837 15:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.837 15:12:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.837 15:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.837 15:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.837 15:12:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.837 15:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.837 15:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.837 15:12:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.837 15:12:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.837 15:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:21.837 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.837 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.096 nvme0n1 00:25:22.096 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.096 15:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.096 15:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.096 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.096 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.096 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.096 15:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.096 15:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.096 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.096 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.096 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.096 15:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.096 15:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:22.096 15:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.096 15:12:37 -- host/auth.sh@44 -- # digest=sha256 00:25:22.096 15:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.096 15:12:37 -- host/auth.sh@44 -- # keyid=3 00:25:22.096 15:12:37 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:22.096 15:12:37 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.096 15:12:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:22.096 15:12:37 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:22.096 15:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:25:22.096 15:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.096 15:12:37 -- host/auth.sh@68 -- # digest=sha256 00:25:22.096 15:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:22.096 15:12:37 -- host/auth.sh@68 -- # keyid=3 00:25:22.096 15:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.096 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.096 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.096 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.096 15:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.096 15:12:37 -- nvmf/common.sh@717 -- # local ip 00:25:22.096 15:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.096 15:12:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.096 15:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.096 15:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.096 15:12:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.096 15:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.096 15:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.096 15:12:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.096 15:12:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.096 15:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:22.096 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.096 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.355 nvme0n1 00:25:22.355 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.355 15:12:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.355 15:12:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.356 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.356 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.356 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.356 15:12:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.356 15:12:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.356 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.356 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.356 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.356 15:12:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.356 15:12:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:22.356 15:12:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.356 15:12:37 -- host/auth.sh@44 -- # digest=sha256 00:25:22.356 15:12:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:22.356 15:12:37 -- host/auth.sh@44 -- # keyid=4 00:25:22.356 15:12:37 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:22.356 15:12:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.356 15:12:37 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:25:22.356 15:12:37 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:22.356 15:12:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:25:22.356 15:12:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.356 15:12:37 -- host/auth.sh@68 -- # digest=sha256 00:25:22.356 15:12:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:22.356 15:12:37 -- host/auth.sh@68 -- # keyid=4 00:25:22.356 15:12:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:22.356 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.356 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.356 15:12:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.356 15:12:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.356 15:12:37 -- nvmf/common.sh@717 -- # local ip 00:25:22.356 15:12:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.356 15:12:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.356 15:12:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.356 15:12:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.356 15:12:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.356 15:12:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.356 15:12:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.356 15:12:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.356 15:12:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.356 15:12:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.356 15:12:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.356 15:12:37 -- common/autotest_common.sh@10 -- # set +x 00:25:22.615 nvme0n1 00:25:22.615 15:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.615 15:12:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.615 15:12:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.615 15:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.615 15:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.615 15:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.615 15:12:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.615 15:12:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.615 15:12:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.615 15:12:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.615 15:12:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.615 15:12:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.615 15:12:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.615 15:12:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:22.615 15:12:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.615 15:12:38 -- host/auth.sh@44 -- # digest=sha256 00:25:22.615 15:12:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:22.615 15:12:38 -- host/auth.sh@44 -- # keyid=0 00:25:22.615 15:12:38 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:22.615 15:12:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.615 15:12:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:24.032 15:12:39 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:24.032 15:12:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:25:24.032 15:12:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.032 15:12:39 -- host/auth.sh@68 -- # digest=sha256 00:25:24.032 15:12:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:24.032 15:12:39 -- host/auth.sh@68 -- # keyid=0 00:25:24.032 15:12:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.032 15:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.032 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.032 15:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.032 15:12:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.032 15:12:39 -- nvmf/common.sh@717 -- # local ip 00:25:24.032 15:12:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.032 15:12:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.032 15:12:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.032 15:12:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.032 15:12:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.032 15:12:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.032 15:12:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.032 15:12:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.032 15:12:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.032 15:12:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:24.032 15:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.032 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.306 nvme0n1 00:25:24.306 15:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.306 15:12:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.306 15:12:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.306 15:12:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.306 15:12:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.306 15:12:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.575 15:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.576 15:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.576 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.576 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.576 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.576 15:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.576 15:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:24.576 15:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.576 15:12:40 -- host/auth.sh@44 -- # digest=sha256 00:25:24.576 15:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.576 15:12:40 -- host/auth.sh@44 -- # keyid=1 00:25:24.576 15:12:40 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:24.576 15:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.576 15:12:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:24.576 15:12:40 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:24.576 15:12:40 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:25:24.576 15:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.576 15:12:40 -- host/auth.sh@68 -- # digest=sha256 00:25:24.576 15:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:24.576 15:12:40 -- host/auth.sh@68 -- # keyid=1 00:25:24.576 15:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.576 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.576 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.576 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.576 15:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.576 15:12:40 -- nvmf/common.sh@717 -- # local ip 00:25:24.576 15:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.576 15:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.576 15:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.576 15:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.576 15:12:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.576 15:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.576 15:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.576 15:12:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.576 15:12:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.576 15:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:24.576 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.576 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.836 nvme0n1 00:25:24.836 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.836 15:12:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.836 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.836 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.836 15:12:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.836 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.836 15:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.836 15:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.836 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.836 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.836 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.836 15:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.836 15:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:24.836 15:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.836 15:12:40 -- host/auth.sh@44 -- # digest=sha256 00:25:24.836 15:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.836 15:12:40 -- host/auth.sh@44 -- # keyid=2 00:25:24.836 15:12:40 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:24.836 15:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.836 15:12:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:24.836 15:12:40 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:24.836 15:12:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:25:24.836 15:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.836 15:12:40 -- 
host/auth.sh@68 -- # digest=sha256 00:25:24.836 15:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:24.836 15:12:40 -- host/auth.sh@68 -- # keyid=2 00:25:24.836 15:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:24.836 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.836 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.836 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.836 15:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.836 15:12:40 -- nvmf/common.sh@717 -- # local ip 00:25:24.836 15:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.836 15:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.836 15:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.836 15:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.836 15:12:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.836 15:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.836 15:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.836 15:12:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.836 15:12:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.836 15:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:24.836 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.836 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.095 nvme0n1 00:25:25.095 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.095 15:12:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.095 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.095 15:12:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:25.095 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.095 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.095 15:12:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.095 15:12:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.095 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.095 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.355 15:12:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:25.355 15:12:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:25.355 15:12:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:25.355 15:12:40 -- host/auth.sh@44 -- # digest=sha256 00:25:25.355 15:12:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.355 15:12:40 -- host/auth.sh@44 -- # keyid=3 00:25:25.355 15:12:40 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:25.355 15:12:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:25.355 15:12:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:25.355 15:12:40 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:25.355 15:12:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:25:25.355 15:12:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:25.355 15:12:40 -- host/auth.sh@68 -- # digest=sha256 00:25:25.355 15:12:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:25.355 15:12:40 
-- host/auth.sh@68 -- # keyid=3 00:25:25.355 15:12:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.355 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.355 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.355 15:12:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.355 15:12:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:25.355 15:12:40 -- nvmf/common.sh@717 -- # local ip 00:25:25.355 15:12:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:25.355 15:12:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:25.355 15:12:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.355 15:12:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.355 15:12:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:25.355 15:12:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.355 15:12:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:25.355 15:12:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:25.355 15:12:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:25.355 15:12:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:25.355 15:12:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.355 15:12:40 -- common/autotest_common.sh@10 -- # set +x 00:25:25.614 nvme0n1 00:25:25.615 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.615 15:12:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:25.615 15:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.615 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.615 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.615 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.615 15:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.615 15:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.615 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.615 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.615 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.615 15:12:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:25.615 15:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:25.615 15:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:25.615 15:12:41 -- host/auth.sh@44 -- # digest=sha256 00:25:25.615 15:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.615 15:12:41 -- host/auth.sh@44 -- # keyid=4 00:25:25.615 15:12:41 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:25.615 15:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:25.615 15:12:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:25.615 15:12:41 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:25.615 15:12:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:25:25.615 15:12:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:25.615 15:12:41 -- host/auth.sh@68 -- # digest=sha256 00:25:25.615 15:12:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:25.615 15:12:41 -- host/auth.sh@68 -- # keyid=4 00:25:25.615 15:12:41 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.615 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.615 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.615 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.615 15:12:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:25.615 15:12:41 -- nvmf/common.sh@717 -- # local ip 00:25:25.615 15:12:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:25.615 15:12:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:25.615 15:12:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.615 15:12:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.615 15:12:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:25.615 15:12:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.615 15:12:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:25.615 15:12:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:25.615 15:12:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:25.615 15:12:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.615 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.615 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.874 nvme0n1 00:25:25.874 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.874 15:12:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:25.874 15:12:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.874 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.874 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.874 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.134 15:12:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.134 15:12:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.134 15:12:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.134 15:12:41 -- common/autotest_common.sh@10 -- # set +x 00:25:26.134 15:12:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.134 15:12:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.134 15:12:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:26.134 15:12:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:26.134 15:12:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:26.134 15:12:41 -- host/auth.sh@44 -- # digest=sha256 00:25:26.134 15:12:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.134 15:12:41 -- host/auth.sh@44 -- # keyid=0 00:25:26.134 15:12:41 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:26.134 15:12:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:26.134 15:12:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:29.429 15:12:44 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:29.429 15:12:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:25:29.429 15:12:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.429 15:12:44 -- host/auth.sh@68 -- # digest=sha256 00:25:29.429 15:12:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:29.429 15:12:44 -- host/auth.sh@68 -- # keyid=0 00:25:29.429 15:12:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:25:29.429 15:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.429 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:25:29.429 15:12:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.429 15:12:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.429 15:12:44 -- nvmf/common.sh@717 -- # local ip 00:25:29.429 15:12:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.429 15:12:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.429 15:12:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.429 15:12:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.429 15:12:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.429 15:12:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.429 15:12:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.429 15:12:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.429 15:12:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.429 15:12:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:29.429 15:12:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.429 15:12:44 -- common/autotest_common.sh@10 -- # set +x 00:25:29.429 nvme0n1 00:25:29.429 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.429 15:12:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.430 15:12:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:29.430 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.430 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:29.692 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.692 15:12:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.692 15:12:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.692 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.692 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:29.692 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.692 15:12:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:29.692 15:12:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:29.692 15:12:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:29.692 15:12:45 -- host/auth.sh@44 -- # digest=sha256 00:25:29.692 15:12:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.692 15:12:45 -- host/auth.sh@44 -- # keyid=1 00:25:29.692 15:12:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:29.692 15:12:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:29.692 15:12:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:29.692 15:12:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:29.692 15:12:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:25:29.692 15:12:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:29.692 15:12:45 -- host/auth.sh@68 -- # digest=sha256 00:25:29.692 15:12:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:29.692 15:12:45 -- host/auth.sh@68 -- # keyid=1 00:25:29.692 15:12:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.692 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.692 15:12:45 -- 
common/autotest_common.sh@10 -- # set +x 00:25:29.692 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.692 15:12:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:29.692 15:12:45 -- nvmf/common.sh@717 -- # local ip 00:25:29.692 15:12:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:29.692 15:12:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:29.692 15:12:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.692 15:12:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.692 15:12:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:29.692 15:12:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.692 15:12:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:29.692 15:12:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:29.692 15:12:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:29.692 15:12:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:29.692 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.692 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.260 nvme0n1 00:25:30.260 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.260 15:12:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.260 15:12:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:30.260 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.260 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.260 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.260 15:12:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.260 15:12:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.260 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.260 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.260 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.260 15:12:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:30.260 15:12:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:30.260 15:12:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:30.260 15:12:45 -- host/auth.sh@44 -- # digest=sha256 00:25:30.260 15:12:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.260 15:12:45 -- host/auth.sh@44 -- # keyid=2 00:25:30.260 15:12:45 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:30.260 15:12:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:30.260 15:12:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:30.260 15:12:45 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:30.260 15:12:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:25:30.260 15:12:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:30.260 15:12:45 -- host/auth.sh@68 -- # digest=sha256 00:25:30.260 15:12:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:30.260 15:12:45 -- host/auth.sh@68 -- # keyid=2 00:25:30.260 15:12:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.260 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.260 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.260 15:12:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.260 15:12:45 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:30.260 15:12:45 -- nvmf/common.sh@717 -- # local ip 00:25:30.260 15:12:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:30.261 15:12:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:30.261 15:12:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.261 15:12:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.261 15:12:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:30.261 15:12:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.261 15:12:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:30.261 15:12:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:30.261 15:12:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:30.261 15:12:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:30.261 15:12:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.261 15:12:45 -- common/autotest_common.sh@10 -- # set +x 00:25:30.837 nvme0n1 00:25:30.837 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.837 15:12:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.837 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.837 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:30.837 15:12:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:30.837 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.837 15:12:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.837 15:12:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.837 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.837 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:30.837 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.837 15:12:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:30.837 15:12:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:30.837 15:12:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:30.837 15:12:46 -- host/auth.sh@44 -- # digest=sha256 00:25:30.837 15:12:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.837 15:12:46 -- host/auth.sh@44 -- # keyid=3 00:25:30.837 15:12:46 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:30.837 15:12:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:30.837 15:12:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:30.837 15:12:46 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:30.837 15:12:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:25:30.837 15:12:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:30.837 15:12:46 -- host/auth.sh@68 -- # digest=sha256 00:25:30.837 15:12:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:30.837 15:12:46 -- host/auth.sh@68 -- # keyid=3 00:25:30.837 15:12:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.837 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.837 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:30.837 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.837 15:12:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:30.837 15:12:46 -- nvmf/common.sh@717 -- # local ip 00:25:30.837 15:12:46 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:30.837 15:12:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:30.837 15:12:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.837 15:12:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.837 15:12:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:30.837 15:12:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.837 15:12:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:30.837 15:12:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:30.837 15:12:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:30.837 15:12:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:30.837 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.837 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:31.407 nvme0n1 00:25:31.407 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.407 15:12:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.407 15:12:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.407 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.407 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:31.407 15:12:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.407 15:12:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.407 15:12:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.407 15:12:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.407 15:12:46 -- common/autotest_common.sh@10 -- # set +x 00:25:31.407 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.407 15:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.407 15:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:31.407 15:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.407 15:12:47 -- host/auth.sh@44 -- # digest=sha256 00:25:31.407 15:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.407 15:12:47 -- host/auth.sh@44 -- # keyid=4 00:25:31.407 15:12:47 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:31.407 15:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:31.407 15:12:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:31.407 15:12:47 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:31.407 15:12:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:25:31.407 15:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.407 15:12:47 -- host/auth.sh@68 -- # digest=sha256 00:25:31.407 15:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:31.407 15:12:47 -- host/auth.sh@68 -- # keyid=4 00:25:31.407 15:12:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.407 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.407 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.407 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.407 15:12:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.407 15:12:47 -- nvmf/common.sh@717 -- # local ip 00:25:31.407 15:12:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.407 15:12:47 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:31.407 15:12:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.407 15:12:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.407 15:12:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.407 15:12:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.407 15:12:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.407 15:12:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.407 15:12:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.407 15:12:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.407 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.407 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 nvme0n1 00:25:31.975 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.975 15:12:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.975 15:12:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.975 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.975 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.975 15:12:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.975 15:12:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.975 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.975 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.975 15:12:47 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:31.975 15:12:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.975 15:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.975 15:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:31.975 15:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.975 15:12:47 -- host/auth.sh@44 -- # digest=sha384 00:25:31.975 15:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.975 15:12:47 -- host/auth.sh@44 -- # keyid=0 00:25:31.975 15:12:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:31.975 15:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:31.975 15:12:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:31.975 15:12:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:31.975 15:12:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:25:31.975 15:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.975 15:12:47 -- host/auth.sh@68 -- # digest=sha384 00:25:31.975 15:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:31.975 15:12:47 -- host/auth.sh@68 -- # keyid=0 00:25:31.975 15:12:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:31.975 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.975 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.975 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.975 15:12:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.975 15:12:47 -- nvmf/common.sh@717 -- # local ip 00:25:31.975 15:12:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.975 15:12:47 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:31.975 15:12:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.975 15:12:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.975 15:12:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.975 15:12:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.975 15:12:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.975 15:12:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.975 15:12:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.975 15:12:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:31.975 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.975 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.235 nvme0n1 00:25:32.235 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.235 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.235 15:12:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.235 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.235 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.235 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.235 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.235 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.235 15:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:32.235 15:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.235 15:12:47 -- host/auth.sh@44 -- # digest=sha384 00:25:32.235 15:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.235 15:12:47 -- host/auth.sh@44 -- # keyid=1 00:25:32.235 15:12:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:32.235 15:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:32.235 15:12:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:32.235 15:12:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:32.235 15:12:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:25:32.235 15:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.235 15:12:47 -- host/auth.sh@68 -- # digest=sha384 00:25:32.235 15:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:32.235 15:12:47 -- host/auth.sh@68 -- # keyid=1 00:25:32.235 15:12:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.235 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.235 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.235 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.235 15:12:47 -- nvmf/common.sh@717 -- # local ip 00:25:32.235 15:12:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.235 15:12:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.235 15:12:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.235 
15:12:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.235 15:12:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.235 15:12:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.235 15:12:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.235 15:12:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.235 15:12:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.235 15:12:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:32.235 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.235 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.235 nvme0n1 00:25:32.235 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.235 15:12:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.235 15:12:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.235 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.235 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.495 15:12:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.495 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.495 15:12:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:32.495 15:12:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.495 15:12:47 -- host/auth.sh@44 -- # digest=sha384 00:25:32.495 15:12:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.495 15:12:47 -- host/auth.sh@44 -- # keyid=2 00:25:32.495 15:12:47 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:32.495 15:12:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:32.495 15:12:47 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:32.495 15:12:47 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:32.495 15:12:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:25:32.495 15:12:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.495 15:12:47 -- host/auth.sh@68 -- # digest=sha384 00:25:32.495 15:12:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:32.495 15:12:47 -- host/auth.sh@68 -- # keyid=2 00:25:32.495 15:12:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.495 15:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:47 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.495 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:32.495 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.495 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.495 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.495 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.495 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.495 15:12:48 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.495 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.495 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.495 15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.495 15:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.495 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 nvme0n1 00:25:32.495 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.495 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.495 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.495 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.495 15:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:32.495 15:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.495 15:12:48 -- host/auth.sh@44 -- # digest=sha384 00:25:32.495 15:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.495 15:12:48 -- host/auth.sh@44 -- # keyid=3 00:25:32.495 15:12:48 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:32.495 15:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:32.495 15:12:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:32.495 15:12:48 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:32.495 15:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:25:32.495 15:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.495 15:12:48 -- host/auth.sh@68 -- # digest=sha384 00:25:32.495 15:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:32.495 15:12:48 -- host/auth.sh@68 -- # keyid=3 00:25:32.495 15:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.495 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.495 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.495 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.495 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:32.495 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.495 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.495 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.495 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.495 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.495 15:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.495 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:25:32.495 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.495 15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.495 15:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:32.495 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.495 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 nvme0n1 00:25:32.754 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.754 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.754 15:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.754 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.754 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.754 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.754 15:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:32.754 15:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.754 15:12:48 -- host/auth.sh@44 -- # digest=sha384 00:25:32.754 15:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.754 15:12:48 -- host/auth.sh@44 -- # keyid=4 00:25:32.754 15:12:48 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:32.754 15:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:32.754 15:12:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:32.754 15:12:48 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:32.754 15:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:25:32.754 15:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.754 15:12:48 -- host/auth.sh@68 -- # digest=sha384 00:25:32.754 15:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:32.754 15:12:48 -- host/auth.sh@68 -- # keyid=4 00:25:32.754 15:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.754 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.754 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.754 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:32.754 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.754 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.754 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.754 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.754 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.754 15:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.754 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.754 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.754 
15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.754 15:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.754 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.754 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 nvme0n1 00:25:32.754 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.754 15:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.754 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.754 15:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.754 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.014 15:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.014 15:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:33.014 15:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # digest=sha384 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # keyid=0 00:25:33.014 15:12:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:33.014 15:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.014 15:12:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:33.014 15:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:25:33.014 15:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # digest=sha384 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # keyid=0 00:25:33.014 15:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.014 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:33.014 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.014 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.014 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.014 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.014 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.014 15:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.014 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.014 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.014 15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.014 15:12:48 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 nvme0n1 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.014 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.014 15:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.014 15:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:33.014 15:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # digest=sha384 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@44 -- # keyid=1 00:25:33.014 15:12:48 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:33.014 15:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.014 15:12:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:33.014 15:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:25:33.014 15:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # digest=sha384 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:33.014 15:12:48 -- host/auth.sh@68 -- # keyid=1 00:25:33.014 15:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.014 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.014 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.273 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.273 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:33.273 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.273 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.273 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.273 15:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:33.273 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.273 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.273 nvme0n1 00:25:33.273 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.273 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.273 15:12:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.273 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.273 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.273 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.273 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.273 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.273 15:12:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:33.273 15:12:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.273 15:12:48 -- host/auth.sh@44 -- # digest=sha384 00:25:33.273 15:12:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.273 15:12:48 -- host/auth.sh@44 -- # keyid=2 00:25:33.273 15:12:48 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:33.273 15:12:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.273 15:12:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:33.273 15:12:48 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:33.273 15:12:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:25:33.273 15:12:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.273 15:12:48 -- host/auth.sh@68 -- # digest=sha384 00:25:33.273 15:12:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:33.273 15:12:48 -- host/auth.sh@68 -- # keyid=2 00:25:33.273 15:12:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.273 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.273 15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.273 15:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.273 15:12:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.273 15:12:48 -- nvmf/common.sh@717 -- # local ip 00:25:33.273 15:12:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.273 15:12:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.273 15:12:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.273 15:12:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.273 15:12:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.273 15:12:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:33.273 15:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.273 
15:12:48 -- common/autotest_common.sh@10 -- # set +x 00:25:33.532 nvme0n1 00:25:33.532 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.532 15:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.532 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.532 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.532 15:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.532 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.532 15:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.532 15:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.532 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.532 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.532 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.532 15:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.532 15:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:33.532 15:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.532 15:12:49 -- host/auth.sh@44 -- # digest=sha384 00:25:33.532 15:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.532 15:12:49 -- host/auth.sh@44 -- # keyid=3 00:25:33.532 15:12:49 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:33.532 15:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.532 15:12:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:33.532 15:12:49 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:33.532 15:12:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:25:33.532 15:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.532 15:12:49 -- host/auth.sh@68 -- # digest=sha384 00:25:33.532 15:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:33.532 15:12:49 -- host/auth.sh@68 -- # keyid=3 00:25:33.532 15:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.532 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.532 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.532 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.532 15:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.532 15:12:49 -- nvmf/common.sh@717 -- # local ip 00:25:33.532 15:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.532 15:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.532 15:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.532 15:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.532 15:12:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.532 15:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.532 15:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.532 15:12:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.532 15:12:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.532 15:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:33.532 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.532 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.532 nvme0n1 00:25:33.532 15:12:49 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.810 15:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.810 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.811 15:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:33.811 15:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # digest=sha384 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # keyid=4 00:25:33.811 15:12:49 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:33.811 15:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.811 15:12:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:33.811 15:12:49 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:33.811 15:12:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:25:33.811 15:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # digest=sha384 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # keyid=4 00:25:33.811 15:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.811 15:12:49 -- nvmf/common.sh@717 -- # local ip 00:25:33.811 15:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.811 15:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.811 15:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.811 15:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 nvme0n1 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.811 15:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.811 15:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.811 15:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:33.811 15:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # digest=sha384 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.811 15:12:49 -- host/auth.sh@44 -- # keyid=0 00:25:33.811 15:12:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:33.811 15:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:33.811 15:12:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:33.811 15:12:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:33.811 15:12:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:25:33.811 15:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # digest=sha384 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:33.811 15:12:49 -- host/auth.sh@68 -- # keyid=0 00:25:33.811 15:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.811 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.811 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:33.811 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.811 15:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.811 15:12:49 -- nvmf/common.sh@717 -- # local ip 00:25:33.811 15:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.811 15:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:33.811 15:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.811 15:12:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.811 15:12:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.070 15:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:34.070 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.070 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 nvme0n1 00:25:34.070 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.070 15:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.070 15:12:49 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:34.070 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.070 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.070 15:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.070 15:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.070 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.070 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.070 15:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.070 15:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:34.070 15:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.070 15:12:49 -- host/auth.sh@44 -- # digest=sha384 00:25:34.070 15:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.070 15:12:49 -- host/auth.sh@44 -- # keyid=1 00:25:34.070 15:12:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:34.070 15:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.070 15:12:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:34.070 15:12:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:34.070 15:12:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:25:34.070 15:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.070 15:12:49 -- host/auth.sh@68 -- # digest=sha384 00:25:34.070 15:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:34.070 15:12:49 -- host/auth.sh@68 -- # keyid=1 00:25:34.070 15:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.070 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.070 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.070 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.070 15:12:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.070 15:12:49 -- nvmf/common.sh@717 -- # local ip 00:25:34.070 15:12:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.070 15:12:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.070 15:12:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.070 15:12:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.070 15:12:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.070 15:12:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.070 15:12:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.070 15:12:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.070 15:12:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.070 15:12:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:34.070 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.070 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 nvme0n1 00:25:34.329 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.329 15:12:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.329 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.329 15:12:49 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:25:34.329 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.329 15:12:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.329 15:12:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.329 15:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.329 15:12:49 -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 15:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.329 15:12:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.329 15:12:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:34.329 15:12:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.329 15:12:49 -- host/auth.sh@44 -- # digest=sha384 00:25:34.329 15:12:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.329 15:12:49 -- host/auth.sh@44 -- # keyid=2 00:25:34.329 15:12:49 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:34.329 15:12:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.329 15:12:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:34.329 15:12:49 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:34.329 15:12:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:25:34.329 15:12:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.329 15:12:49 -- host/auth.sh@68 -- # digest=sha384 00:25:34.329 15:12:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:34.329 15:12:49 -- host/auth.sh@68 -- # keyid=2 00:25:34.330 15:12:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.330 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.330 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.330 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.330 15:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.330 15:12:50 -- nvmf/common.sh@717 -- # local ip 00:25:34.330 15:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.330 15:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.330 15:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.330 15:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.330 15:12:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.330 15:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.330 15:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.330 15:12:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.330 15:12:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.330 15:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:34.330 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.330 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.588 nvme0n1 00:25:34.588 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.588 15:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.588 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.589 15:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.589 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.589 15:12:50 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.589 15:12:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.589 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.589 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.589 15:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.589 15:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:34.589 15:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.589 15:12:50 -- host/auth.sh@44 -- # digest=sha384 00:25:34.589 15:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.589 15:12:50 -- host/auth.sh@44 -- # keyid=3 00:25:34.589 15:12:50 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:34.589 15:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.589 15:12:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:34.589 15:12:50 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:34.589 15:12:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:25:34.589 15:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.589 15:12:50 -- host/auth.sh@68 -- # digest=sha384 00:25:34.589 15:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:34.589 15:12:50 -- host/auth.sh@68 -- # keyid=3 00:25:34.589 15:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.589 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.589 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.589 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.589 15:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.589 15:12:50 -- nvmf/common.sh@717 -- # local ip 00:25:34.589 15:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.589 15:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.589 15:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.589 15:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.589 15:12:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.589 15:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.589 15:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.589 15:12:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.589 15:12:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.589 15:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:34.589 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.589 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.848 nvme0n1 00:25:34.848 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.848 15:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.848 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.848 15:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.848 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.848 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.848 15:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.848 15:12:50 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:34.848 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.848 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.848 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.848 15:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.848 15:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:34.848 15:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.848 15:12:50 -- host/auth.sh@44 -- # digest=sha384 00:25:34.848 15:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.848 15:12:50 -- host/auth.sh@44 -- # keyid=4 00:25:34.848 15:12:50 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:34.848 15:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.848 15:12:50 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:34.848 15:12:50 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:34.848 15:12:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:25:34.848 15:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.848 15:12:50 -- host/auth.sh@68 -- # digest=sha384 00:25:34.848 15:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:34.848 15:12:50 -- host/auth.sh@68 -- # keyid=4 00:25:34.848 15:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.848 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.848 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.848 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.848 15:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.848 15:12:50 -- nvmf/common.sh@717 -- # local ip 00:25:34.848 15:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.848 15:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.848 15:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.848 15:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.848 15:12:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.848 15:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.848 15:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.848 15:12:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.848 15:12:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.848 15:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.848 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.848 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.107 nvme0n1 00:25:35.108 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.108 15:12:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.108 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.108 15:12:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.108 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.108 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.108 15:12:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.108 15:12:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.108 15:12:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.108 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.108 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.108 15:12:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.108 15:12:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.108 15:12:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:35.108 15:12:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.108 15:12:50 -- host/auth.sh@44 -- # digest=sha384 00:25:35.108 15:12:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.108 15:12:50 -- host/auth.sh@44 -- # keyid=0 00:25:35.108 15:12:50 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:35.108 15:12:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.108 15:12:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:35.108 15:12:50 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:35.108 15:12:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:25:35.108 15:12:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.108 15:12:50 -- host/auth.sh@68 -- # digest=sha384 00:25:35.108 15:12:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:35.108 15:12:50 -- host/auth.sh@68 -- # keyid=0 00:25:35.108 15:12:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.108 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.108 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.108 15:12:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.108 15:12:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.108 15:12:50 -- nvmf/common.sh@717 -- # local ip 00:25:35.108 15:12:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.108 15:12:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.108 15:12:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.108 15:12:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.108 15:12:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.108 15:12:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.108 15:12:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.108 15:12:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.108 15:12:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.108 15:12:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:35.108 15:12:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.108 15:12:50 -- common/autotest_common.sh@10 -- # set +x 00:25:35.367 nvme0n1 00:25:35.367 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.367 15:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.367 15:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.367 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.367 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.625 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.625 15:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.625 15:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.625 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.625 15:12:51 -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.625 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.625 15:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.625 15:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:35.625 15:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.625 15:12:51 -- host/auth.sh@44 -- # digest=sha384 00:25:35.625 15:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.625 15:12:51 -- host/auth.sh@44 -- # keyid=1 00:25:35.625 15:12:51 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:35.625 15:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.625 15:12:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:35.625 15:12:51 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:35.625 15:12:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:25:35.625 15:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.625 15:12:51 -- host/auth.sh@68 -- # digest=sha384 00:25:35.625 15:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:35.625 15:12:51 -- host/auth.sh@68 -- # keyid=1 00:25:35.625 15:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.625 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.625 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.625 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.625 15:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.625 15:12:51 -- nvmf/common.sh@717 -- # local ip 00:25:35.625 15:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.625 15:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.626 15:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.626 15:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.626 15:12:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.626 15:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.626 15:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.626 15:12:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.626 15:12:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.626 15:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:35.626 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.626 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.885 nvme0n1 00:25:35.885 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.885 15:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.885 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.885 15:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.885 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.885 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.885 15:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.885 15:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.885 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.885 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.885 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:25:35.885 15:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.885 15:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:35.885 15:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.885 15:12:51 -- host/auth.sh@44 -- # digest=sha384 00:25:35.885 15:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.885 15:12:51 -- host/auth.sh@44 -- # keyid=2 00:25:35.885 15:12:51 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:35.885 15:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.885 15:12:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:35.885 15:12:51 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:35.885 15:12:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:25:35.885 15:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.885 15:12:51 -- host/auth.sh@68 -- # digest=sha384 00:25:35.885 15:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:35.885 15:12:51 -- host/auth.sh@68 -- # keyid=2 00:25:35.885 15:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.885 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.885 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:35.885 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.885 15:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.885 15:12:51 -- nvmf/common.sh@717 -- # local ip 00:25:35.885 15:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.885 15:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.885 15:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.885 15:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.885 15:12:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.885 15:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.885 15:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.885 15:12:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.885 15:12:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.885 15:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:35.885 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.885 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.144 nvme0n1 00:25:36.144 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.144 15:12:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.145 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.145 15:12:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.145 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.145 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.404 15:12:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.404 15:12:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.404 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.404 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.404 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.404 15:12:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.404 15:12:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:25:36.404 15:12:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.404 15:12:51 -- host/auth.sh@44 -- # digest=sha384 00:25:36.404 15:12:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.404 15:12:51 -- host/auth.sh@44 -- # keyid=3 00:25:36.404 15:12:51 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:36.404 15:12:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.404 15:12:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:36.404 15:12:51 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:36.404 15:12:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:25:36.404 15:12:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.404 15:12:51 -- host/auth.sh@68 -- # digest=sha384 00:25:36.404 15:12:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:36.404 15:12:51 -- host/auth.sh@68 -- # keyid=3 00:25:36.404 15:12:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.404 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.404 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.404 15:12:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.404 15:12:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.404 15:12:51 -- nvmf/common.sh@717 -- # local ip 00:25:36.404 15:12:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.404 15:12:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.404 15:12:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.404 15:12:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.404 15:12:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.404 15:12:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.404 15:12:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.404 15:12:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.404 15:12:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.404 15:12:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:36.404 15:12:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.404 15:12:51 -- common/autotest_common.sh@10 -- # set +x 00:25:36.664 nvme0n1 00:25:36.664 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.664 15:12:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.664 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.664 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.664 15:12:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.664 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.664 15:12:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.664 15:12:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.664 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.664 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.664 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.664 15:12:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.664 15:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:36.664 15:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.664 15:12:52 -- host/auth.sh@44 -- 
# digest=sha384 00:25:36.664 15:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.664 15:12:52 -- host/auth.sh@44 -- # keyid=4 00:25:36.664 15:12:52 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:36.664 15:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.664 15:12:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:36.664 15:12:52 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:36.664 15:12:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:25:36.664 15:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.664 15:12:52 -- host/auth.sh@68 -- # digest=sha384 00:25:36.664 15:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:36.664 15:12:52 -- host/auth.sh@68 -- # keyid=4 00:25:36.664 15:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.664 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.664 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.664 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.664 15:12:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.664 15:12:52 -- nvmf/common.sh@717 -- # local ip 00:25:36.664 15:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.664 15:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.664 15:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.664 15:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.664 15:12:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.664 15:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.664 15:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.664 15:12:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.664 15:12:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.664 15:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.664 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.664 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.924 nvme0n1 00:25:36.924 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.924 15:12:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.924 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.924 15:12:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.924 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.924 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.924 15:12:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.924 15:12:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.924 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.924 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.924 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.924 15:12:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.924 15:12:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.924 15:12:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:36.924 15:12:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.924 15:12:52 -- host/auth.sh@44 -- # 
digest=sha384 00:25:36.924 15:12:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.924 15:12:52 -- host/auth.sh@44 -- # keyid=0 00:25:36.924 15:12:52 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:36.924 15:12:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.924 15:12:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:36.924 15:12:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:36.924 15:12:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:25:36.924 15:12:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.924 15:12:52 -- host/auth.sh@68 -- # digest=sha384 00:25:36.924 15:12:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:36.924 15:12:52 -- host/auth.sh@68 -- # keyid=0 00:25:36.924 15:12:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.924 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.184 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.184 15:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.184 15:12:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:37.184 15:12:52 -- nvmf/common.sh@717 -- # local ip 00:25:37.184 15:12:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.184 15:12:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.184 15:12:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.184 15:12:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.184 15:12:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.184 15:12:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.184 15:12:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.184 15:12:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.184 15:12:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.184 15:12:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:37.184 15:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.184 15:12:52 -- common/autotest_common.sh@10 -- # set +x 00:25:37.443 nvme0n1 00:25:37.443 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.702 15:12:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.702 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.702 15:12:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:37.702 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.702 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.702 15:12:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.702 15:12:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.702 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.702 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.702 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.702 15:12:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:37.702 15:12:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:37.702 15:12:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:37.702 15:12:53 -- host/auth.sh@44 -- # digest=sha384 00:25:37.702 15:12:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.702 15:12:53 -- host/auth.sh@44 -- # keyid=1 00:25:37.702 15:12:53 -- 
host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:37.702 15:12:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:37.702 15:12:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:37.702 15:12:53 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:37.702 15:12:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:25:37.702 15:12:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:37.702 15:12:53 -- host/auth.sh@68 -- # digest=sha384 00:25:37.702 15:12:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:37.702 15:12:53 -- host/auth.sh@68 -- # keyid=1 00:25:37.702 15:12:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.702 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.702 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.702 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.702 15:12:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:37.702 15:12:53 -- nvmf/common.sh@717 -- # local ip 00:25:37.702 15:12:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.702 15:12:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.702 15:12:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.702 15:12:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.702 15:12:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.702 15:12:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.702 15:12:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.702 15:12:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.702 15:12:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.702 15:12:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:37.702 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.702 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.270 nvme0n1 00:25:38.270 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.270 15:12:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.270 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.270 15:12:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:38.270 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.270 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.270 15:12:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.270 15:12:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.270 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.270 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.270 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.270 15:12:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:38.270 15:12:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:38.270 15:12:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:38.270 15:12:53 -- host/auth.sh@44 -- # digest=sha384 00:25:38.270 15:12:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.270 15:12:53 -- host/auth.sh@44 -- # keyid=2 00:25:38.270 15:12:53 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:38.270 15:12:53 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:38.270 15:12:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:38.270 15:12:53 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:38.270 15:12:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:25:38.270 15:12:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:38.270 15:12:53 -- host/auth.sh@68 -- # digest=sha384 00:25:38.270 15:12:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:38.270 15:12:53 -- host/auth.sh@68 -- # keyid=2 00:25:38.270 15:12:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.270 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.270 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.270 15:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.270 15:12:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:38.270 15:12:53 -- nvmf/common.sh@717 -- # local ip 00:25:38.270 15:12:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:38.270 15:12:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:38.270 15:12:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.270 15:12:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.270 15:12:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:38.270 15:12:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.270 15:12:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:38.270 15:12:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:38.270 15:12:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:38.270 15:12:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:38.270 15:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.270 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:25:38.840 nvme0n1 00:25:38.840 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.840 15:12:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:38.840 15:12:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.840 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.840 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.840 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.840 15:12:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.840 15:12:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.840 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.840 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.840 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.840 15:12:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:38.840 15:12:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:38.840 15:12:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:38.840 15:12:54 -- host/auth.sh@44 -- # digest=sha384 00:25:38.840 15:12:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.840 15:12:54 -- host/auth.sh@44 -- # keyid=3 00:25:38.840 15:12:54 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:38.840 15:12:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:38.840 15:12:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:38.840 15:12:54 -- host/auth.sh@49 
-- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:38.840 15:12:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:25:38.840 15:12:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:38.840 15:12:54 -- host/auth.sh@68 -- # digest=sha384 00:25:38.840 15:12:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:38.840 15:12:54 -- host/auth.sh@68 -- # keyid=3 00:25:38.840 15:12:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.840 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.840 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.840 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.840 15:12:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:38.840 15:12:54 -- nvmf/common.sh@717 -- # local ip 00:25:38.840 15:12:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:38.840 15:12:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:38.840 15:12:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.840 15:12:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.840 15:12:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:38.840 15:12:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.840 15:12:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:38.840 15:12:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:38.840 15:12:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:38.840 15:12:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:38.840 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.840 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:25:39.411 nvme0n1 00:25:39.411 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.411 15:12:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.411 15:12:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.411 15:12:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:39.411 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:25:39.411 15:12:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.411 15:12:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.411 15:12:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.411 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.411 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.411 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.411 15:12:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:39.411 15:12:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:39.411 15:12:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:39.411 15:12:55 -- host/auth.sh@44 -- # digest=sha384 00:25:39.411 15:12:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.411 15:12:55 -- host/auth.sh@44 -- # keyid=4 00:25:39.411 15:12:55 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:39.411 15:12:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:39.411 15:12:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:39.411 15:12:55 -- host/auth.sh@49 -- # echo 
DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:39.411 15:12:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:25:39.411 15:12:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:39.411 15:12:55 -- host/auth.sh@68 -- # digest=sha384 00:25:39.411 15:12:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:39.411 15:12:55 -- host/auth.sh@68 -- # keyid=4 00:25:39.411 15:12:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.411 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.411 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.411 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.411 15:12:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:39.411 15:12:55 -- nvmf/common.sh@717 -- # local ip 00:25:39.411 15:12:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:39.411 15:12:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:39.411 15:12:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.411 15:12:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.411 15:12:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:39.411 15:12:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.411 15:12:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:39.411 15:12:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:39.411 15:12:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:39.412 15:12:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.412 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.412 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.990 nvme0n1 00:25:39.990 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.990 15:12:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.990 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.990 15:12:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:39.990 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.990 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.990 15:12:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.990 15:12:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.990 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.990 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.990 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.990 15:12:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:39.990 15:12:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.990 15:12:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:39.990 15:12:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:39.990 15:12:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:39.990 15:12:55 -- host/auth.sh@44 -- # digest=sha512 00:25:39.990 15:12:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.990 15:12:55 -- host/auth.sh@44 -- # keyid=0 00:25:39.990 15:12:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:39.990 15:12:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:39.990 15:12:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:39.990 
15:12:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:39.990 15:12:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:25:39.990 15:12:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:39.990 15:12:55 -- host/auth.sh@68 -- # digest=sha512 00:25:39.990 15:12:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:39.990 15:12:55 -- host/auth.sh@68 -- # keyid=0 00:25:39.990 15:12:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:39.990 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.990 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:39.990 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.990 15:12:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:39.990 15:12:55 -- nvmf/common.sh@717 -- # local ip 00:25:39.990 15:12:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:39.990 15:12:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:39.990 15:12:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.990 15:12:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.990 15:12:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:39.990 15:12:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.990 15:12:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:39.990 15:12:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:39.990 15:12:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:39.990 15:12:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:39.990 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.990 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.258 nvme0n1 00:25:40.258 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.258 15:12:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.258 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.258 15:12:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.258 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.258 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.258 15:12:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.258 15:12:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.258 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.258 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.258 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.258 15:12:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.258 15:12:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:40.258 15:12:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.258 15:12:55 -- host/auth.sh@44 -- # digest=sha512 00:25:40.258 15:12:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.258 15:12:55 -- host/auth.sh@44 -- # keyid=1 00:25:40.259 15:12:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:40.259 15:12:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:40.259 15:12:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:40.259 15:12:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:40.259 15:12:55 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:25:40.259 15:12:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.259 15:12:55 -- host/auth.sh@68 -- # digest=sha512 00:25:40.259 15:12:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:40.259 15:12:55 -- host/auth.sh@68 -- # keyid=1 00:25:40.259 15:12:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.259 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.259 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.259 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.259 15:12:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.259 15:12:55 -- nvmf/common.sh@717 -- # local ip 00:25:40.259 15:12:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.259 15:12:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.259 15:12:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.259 15:12:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.259 15:12:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.259 15:12:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.259 15:12:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.259 15:12:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.259 15:12:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.259 15:12:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:40.259 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.259 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.259 nvme0n1 00:25:40.259 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.259 15:12:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.259 15:12:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.259 15:12:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.259 15:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:40.529 15:12:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.529 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.529 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.529 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.529 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.529 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.529 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.529 15:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:40.529 15:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.529 15:12:56 -- host/auth.sh@44 -- # digest=sha512 00:25:40.529 15:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.529 15:12:56 -- host/auth.sh@44 -- # keyid=2 00:25:40.529 15:12:56 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:40.529 15:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:40.529 15:12:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:40.529 15:12:56 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:40.529 15:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:25:40.529 15:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.529 15:12:56 -- 
host/auth.sh@68 -- # digest=sha512 00:25:40.529 15:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:40.529 15:12:56 -- host/auth.sh@68 -- # keyid=2 00:25:40.529 15:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.529 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.529 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.529 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.529 15:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.529 15:12:56 -- nvmf/common.sh@717 -- # local ip 00:25:40.529 15:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.529 15:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.529 15:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.529 15:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.530 15:12:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.530 15:12:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.530 15:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.530 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.530 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.530 nvme0n1 00:25:40.530 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.530 15:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.530 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.530 15:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.530 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.530 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.530 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.530 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.530 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.530 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.530 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.530 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.530 15:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:40.530 15:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.530 15:12:56 -- host/auth.sh@44 -- # digest=sha512 00:25:40.530 15:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.530 15:12:56 -- host/auth.sh@44 -- # keyid=3 00:25:40.530 15:12:56 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:40.530 15:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:40.530 15:12:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:40.530 15:12:56 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:40.530 15:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:25:40.530 15:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.530 15:12:56 -- host/auth.sh@68 -- # digest=sha512 00:25:40.530 15:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:40.530 15:12:56 
-- host/auth.sh@68 -- # keyid=3 00:25:40.530 15:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.530 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.530 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.530 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.530 15:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.530 15:12:56 -- nvmf/common.sh@717 -- # local ip 00:25:40.530 15:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.530 15:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.530 15:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.530 15:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.530 15:12:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.530 15:12:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.530 15:12:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.792 15:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:40.792 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.792 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.792 nvme0n1 00:25:40.792 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.792 15:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.792 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.792 15:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.792 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.792 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.792 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.792 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.792 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.792 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.792 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.792 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.792 15:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:40.792 15:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.792 15:12:56 -- host/auth.sh@44 -- # digest=sha512 00:25:40.792 15:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.792 15:12:56 -- host/auth.sh@44 -- # keyid=4 00:25:40.792 15:12:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:40.792 15:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:40.792 15:12:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:40.792 15:12:56 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:40.792 15:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:25:40.792 15:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.792 15:12:56 -- host/auth.sh@68 -- # digest=sha512 00:25:40.792 15:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:40.792 15:12:56 -- host/auth.sh@68 -- # keyid=4 00:25:40.792 15:12:56 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.792 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.792 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.792 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.792 15:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.792 15:12:56 -- nvmf/common.sh@717 -- # local ip 00:25:40.792 15:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.792 15:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.792 15:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.792 15:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.792 15:12:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.792 15:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.792 15:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.792 15:12:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.792 15:12:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.792 15:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.792 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.792 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.050 nvme0n1 00:25:41.050 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.050 15:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.050 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.050 15:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.050 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.050 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.050 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.050 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.050 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.050 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.050 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.050 15:12:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.050 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.050 15:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:41.050 15:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.050 15:12:56 -- host/auth.sh@44 -- # digest=sha512 00:25:41.050 15:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.050 15:12:56 -- host/auth.sh@44 -- # keyid=0 00:25:41.050 15:12:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:41.050 15:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.050 15:12:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:41.050 15:12:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:41.051 15:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:25:41.051 15:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.051 15:12:56 -- host/auth.sh@68 -- # digest=sha512 00:25:41.051 15:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:41.051 15:12:56 -- host/auth.sh@68 -- # keyid=0 00:25:41.051 15:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
00:25:41.051 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.051 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.051 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.051 15:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.051 15:12:56 -- nvmf/common.sh@717 -- # local ip 00:25:41.051 15:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.051 15:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.051 15:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.051 15:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.051 15:12:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.051 15:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.051 15:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.051 15:12:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.051 15:12:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.051 15:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:41.051 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.051 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.051 nvme0n1 00:25:41.051 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.051 15:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.051 15:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.051 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.051 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.051 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.309 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.309 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.309 15:12:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:41.309 15:12:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.309 15:12:56 -- host/auth.sh@44 -- # digest=sha512 00:25:41.309 15:12:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.309 15:12:56 -- host/auth.sh@44 -- # keyid=1 00:25:41.309 15:12:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:41.309 15:12:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.309 15:12:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:41.309 15:12:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:41.309 15:12:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:25:41.309 15:12:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.309 15:12:56 -- host/auth.sh@68 -- # digest=sha512 00:25:41.309 15:12:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:41.309 15:12:56 -- host/auth.sh@68 -- # keyid=1 00:25:41.309 15:12:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.309 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:56 -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.309 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.309 15:12:56 -- nvmf/common.sh@717 -- # local ip 00:25:41.309 15:12:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.309 15:12:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.309 15:12:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.309 15:12:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.309 15:12:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.309 15:12:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.309 15:12:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.309 15:12:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.309 15:12:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.309 15:12:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:41.309 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.309 nvme0n1 00:25:41.309 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.309 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.309 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.309 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.309 15:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:56 -- common/autotest_common.sh@10 -- # set +x 00:25:41.309 15:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.309 15:12:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.309 15:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:41.309 15:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.309 15:12:57 -- host/auth.sh@44 -- # digest=sha512 00:25:41.309 15:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.309 15:12:57 -- host/auth.sh@44 -- # keyid=2 00:25:41.309 15:12:57 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:41.309 15:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.309 15:12:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:41.309 15:12:57 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:41.309 15:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:25:41.309 15:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.309 15:12:57 -- host/auth.sh@68 -- # digest=sha512 00:25:41.309 15:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:41.309 15:12:57 -- host/auth.sh@68 -- # keyid=2 00:25:41.309 15:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.309 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.309 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.568 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:41.568 15:12:57 -- nvmf/common.sh@717 -- # local ip 00:25:41.568 15:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.568 15:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.568 15:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.568 15:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:41.568 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.568 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.568 nvme0n1 00:25:41.568 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.568 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.568 15:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.568 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.568 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.568 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.568 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.568 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.568 15:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:41.568 15:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.568 15:12:57 -- host/auth.sh@44 -- # digest=sha512 00:25:41.568 15:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.568 15:12:57 -- host/auth.sh@44 -- # keyid=3 00:25:41.568 15:12:57 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:41.568 15:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.568 15:12:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:41.568 15:12:57 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:41.568 15:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:25:41.568 15:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.568 15:12:57 -- host/auth.sh@68 -- # digest=sha512 00:25:41.568 15:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:41.568 15:12:57 -- host/auth.sh@68 -- # keyid=3 00:25:41.568 15:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.568 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.568 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.568 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.568 15:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.568 15:12:57 -- nvmf/common.sh@717 -- # local ip 00:25:41.568 15:12:57 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:41.568 15:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.568 15:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.568 15:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.568 15:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.568 15:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:41.568 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.568 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.827 nvme0n1 00:25:41.827 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.827 15:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.827 15:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.827 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.827 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.827 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.827 15:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.827 15:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.827 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.827 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.827 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.827 15:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.827 15:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:41.827 15:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.827 15:12:57 -- host/auth.sh@44 -- # digest=sha512 00:25:41.827 15:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.827 15:12:57 -- host/auth.sh@44 -- # keyid=4 00:25:41.827 15:12:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:41.827 15:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.827 15:12:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:41.827 15:12:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:41.827 15:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:25:41.827 15:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.827 15:12:57 -- host/auth.sh@68 -- # digest=sha512 00:25:41.827 15:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:41.827 15:12:57 -- host/auth.sh@68 -- # keyid=4 00:25:41.827 15:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.827 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.827 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:41.827 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.827 15:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.827 15:12:57 -- nvmf/common.sh@717 -- # local ip 00:25:41.827 15:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.827 15:12:57 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:41.827 15:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.827 15:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.827 15:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.827 15:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.827 15:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.827 15:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.827 15:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.828 15:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.828 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.828 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.086 nvme0n1 00:25:42.086 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.086 15:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.086 15:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.086 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.086 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.086 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.086 15:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.086 15:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.086 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.086 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.086 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.086 15:12:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.086 15:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.086 15:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:42.086 15:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.086 15:12:57 -- host/auth.sh@44 -- # digest=sha512 00:25:42.086 15:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.086 15:12:57 -- host/auth.sh@44 -- # keyid=0 00:25:42.086 15:12:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:42.086 15:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.086 15:12:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:42.086 15:12:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:42.086 15:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:25:42.086 15:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.086 15:12:57 -- host/auth.sh@68 -- # digest=sha512 00:25:42.086 15:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:42.086 15:12:57 -- host/auth.sh@68 -- # keyid=0 00:25:42.086 15:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.086 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.086 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.086 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.086 15:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.086 15:12:57 -- nvmf/common.sh@717 -- # local ip 00:25:42.086 15:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.086 15:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.086 15:12:57 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.086 15:12:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.086 15:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.086 15:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.086 15:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.086 15:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.086 15:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.086 15:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:42.086 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.086 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 nvme0n1 00:25:42.344 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.344 15:12:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.344 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.344 15:12:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.344 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.344 15:12:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.344 15:12:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.344 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.344 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.344 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.344 15:12:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.344 15:12:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:42.344 15:12:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.344 15:12:57 -- host/auth.sh@44 -- # digest=sha512 00:25:42.344 15:12:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.344 15:12:57 -- host/auth.sh@44 -- # keyid=1 00:25:42.344 15:12:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:42.344 15:12:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.345 15:12:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:42.345 15:12:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:42.345 15:12:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:25:42.345 15:12:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.345 15:12:57 -- host/auth.sh@68 -- # digest=sha512 00:25:42.345 15:12:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:42.345 15:12:57 -- host/auth.sh@68 -- # keyid=1 00:25:42.345 15:12:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.345 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.345 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.345 15:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.345 15:12:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.345 15:12:57 -- nvmf/common.sh@717 -- # local ip 00:25:42.345 15:12:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.345 15:12:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.345 15:12:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.345 15:12:57 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.345 15:12:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.345 15:12:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.345 15:12:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.345 15:12:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.345 15:12:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.345 15:12:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:42.345 15:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.345 15:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:42.602 nvme0n1 00:25:42.602 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.602 15:12:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.602 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.602 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.602 15:12:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.602 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.602 15:12:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.602 15:12:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.602 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.602 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.602 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.602 15:12:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.602 15:12:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:42.602 15:12:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.602 15:12:58 -- host/auth.sh@44 -- # digest=sha512 00:25:42.602 15:12:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.602 15:12:58 -- host/auth.sh@44 -- # keyid=2 00:25:42.602 15:12:58 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:42.602 15:12:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.602 15:12:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:42.602 15:12:58 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:42.602 15:12:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:25:42.602 15:12:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.602 15:12:58 -- host/auth.sh@68 -- # digest=sha512 00:25:42.602 15:12:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:42.602 15:12:58 -- host/auth.sh@68 -- # keyid=2 00:25:42.602 15:12:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.602 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.602 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.602 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.602 15:12:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.602 15:12:58 -- nvmf/common.sh@717 -- # local ip 00:25:42.602 15:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.602 15:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.602 15:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.602 15:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.602 15:12:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.602 15:12:58 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:42.602 15:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.602 15:12:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.602 15:12:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.602 15:12:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.602 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.602 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.861 nvme0n1 00:25:42.861 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.861 15:12:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.861 15:12:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.861 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.861 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.861 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.861 15:12:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.861 15:12:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.861 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.861 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.861 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.861 15:12:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.861 15:12:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:42.861 15:12:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.861 15:12:58 -- host/auth.sh@44 -- # digest=sha512 00:25:42.861 15:12:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.861 15:12:58 -- host/auth.sh@44 -- # keyid=3 00:25:42.861 15:12:58 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:42.861 15:12:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.861 15:12:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:42.861 15:12:58 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:42.861 15:12:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:25:42.861 15:12:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.861 15:12:58 -- host/auth.sh@68 -- # digest=sha512 00:25:42.861 15:12:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:42.861 15:12:58 -- host/auth.sh@68 -- # keyid=3 00:25:42.861 15:12:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.861 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.861 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.861 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.861 15:12:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.861 15:12:58 -- nvmf/common.sh@717 -- # local ip 00:25:42.861 15:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.861 15:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.861 15:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.861 15:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.861 15:12:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.861 15:12:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.861 15:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.861 15:12:58 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.861 15:12:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.861 15:12:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:42.861 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.861 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.120 nvme0n1 00:25:43.120 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.120 15:12:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.120 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.120 15:12:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.120 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.120 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.120 15:12:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.120 15:12:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.120 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.120 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.120 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.120 15:12:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.120 15:12:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:43.120 15:12:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.120 15:12:58 -- host/auth.sh@44 -- # digest=sha512 00:25:43.120 15:12:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.120 15:12:58 -- host/auth.sh@44 -- # keyid=4 00:25:43.120 15:12:58 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:43.120 15:12:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.120 15:12:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:43.120 15:12:58 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:43.120 15:12:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:25:43.120 15:12:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.120 15:12:58 -- host/auth.sh@68 -- # digest=sha512 00:25:43.120 15:12:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:43.120 15:12:58 -- host/auth.sh@68 -- # keyid=4 00:25:43.120 15:12:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.120 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.120 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.120 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.120 15:12:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.120 15:12:58 -- nvmf/common.sh@717 -- # local ip 00:25:43.120 15:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.120 15:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.120 15:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.120 15:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.120 15:12:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.120 15:12:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.120 15:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.120 15:12:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.120 15:12:58 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.120 15:12:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.120 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.120 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 nvme0n1 00:25:43.381 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.381 15:12:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.381 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.381 15:12:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.381 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.381 15:12:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.381 15:12:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.381 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.381 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.381 15:12:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.381 15:12:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.381 15:12:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:43.381 15:12:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.381 15:12:58 -- host/auth.sh@44 -- # digest=sha512 00:25:43.381 15:12:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.381 15:12:58 -- host/auth.sh@44 -- # keyid=0 00:25:43.381 15:12:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:43.381 15:12:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.381 15:12:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:43.381 15:12:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:43.381 15:12:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:25:43.381 15:12:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.381 15:12:58 -- host/auth.sh@68 -- # digest=sha512 00:25:43.381 15:12:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:43.381 15:12:58 -- host/auth.sh@68 -- # keyid=0 00:25:43.381 15:12:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.381 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.381 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.381 15:12:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.381 15:12:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.381 15:12:58 -- nvmf/common.sh@717 -- # local ip 00:25:43.381 15:12:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.381 15:12:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.381 15:12:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.381 15:12:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.381 15:12:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.381 15:12:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.381 15:12:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.381 15:12:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.381 15:12:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.381 15:12:58 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:43.381 15:12:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.381 15:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:43.655 nvme0n1 00:25:43.655 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.655 15:12:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.655 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.655 15:12:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.655 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.655 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.655 15:12:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.655 15:12:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.655 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.655 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.655 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.655 15:12:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.655 15:12:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:43.655 15:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.655 15:12:59 -- host/auth.sh@44 -- # digest=sha512 00:25:43.655 15:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.655 15:12:59 -- host/auth.sh@44 -- # keyid=1 00:25:43.655 15:12:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:43.655 15:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.655 15:12:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:43.655 15:12:59 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:43.655 15:12:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:25:43.655 15:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.655 15:12:59 -- host/auth.sh@68 -- # digest=sha512 00:25:43.655 15:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:43.655 15:12:59 -- host/auth.sh@68 -- # keyid=1 00:25:43.655 15:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.655 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.655 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.655 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.655 15:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.655 15:12:59 -- nvmf/common.sh@717 -- # local ip 00:25:43.655 15:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.655 15:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.655 15:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.655 15:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.655 15:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.655 15:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.655 15:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.655 15:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.655 15:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.655 15:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:43.655 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.655 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:44.224 nvme0n1 00:25:44.224 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.224 15:12:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.224 15:12:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.224 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.224 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:44.224 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.224 15:12:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.224 15:12:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.224 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.224 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:44.224 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.224 15:12:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.224 15:12:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:44.224 15:12:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.224 15:12:59 -- host/auth.sh@44 -- # digest=sha512 00:25:44.224 15:12:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.224 15:12:59 -- host/auth.sh@44 -- # keyid=2 00:25:44.224 15:12:59 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:44.224 15:12:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.224 15:12:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:44.224 15:12:59 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:44.224 15:12:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:25:44.224 15:12:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.224 15:12:59 -- host/auth.sh@68 -- # digest=sha512 00:25:44.224 15:12:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:44.224 15:12:59 -- host/auth.sh@68 -- # keyid=2 00:25:44.224 15:12:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.224 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.224 15:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:44.224 15:12:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.224 15:12:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.224 15:12:59 -- nvmf/common.sh@717 -- # local ip 00:25:44.224 15:12:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.224 15:12:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.224 15:12:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.224 15:12:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.224 15:12:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.224 15:12:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.224 15:12:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.224 15:12:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.224 15:12:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.224 15:12:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:44.224 15:12:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.224 15:12:59 -- 
common/autotest_common.sh@10 -- # set +x 00:25:44.483 nvme0n1 00:25:44.483 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.483 15:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.483 15:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.483 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.483 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.483 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.483 15:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.483 15:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.483 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.483 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.483 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.483 15:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.483 15:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:44.483 15:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.483 15:13:00 -- host/auth.sh@44 -- # digest=sha512 00:25:44.483 15:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.483 15:13:00 -- host/auth.sh@44 -- # keyid=3 00:25:44.483 15:13:00 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:44.483 15:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.483 15:13:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:44.483 15:13:00 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:44.483 15:13:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:25:44.483 15:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.483 15:13:00 -- host/auth.sh@68 -- # digest=sha512 00:25:44.483 15:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:44.483 15:13:00 -- host/auth.sh@68 -- # keyid=3 00:25:44.483 15:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.483 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.483 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.483 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.483 15:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.483 15:13:00 -- nvmf/common.sh@717 -- # local ip 00:25:44.483 15:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.483 15:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.483 15:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.483 15:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.483 15:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.483 15:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.483 15:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.483 15:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.483 15:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.483 15:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:44.483 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.483 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.742 nvme0n1 00:25:44.742 15:13:00 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:25:44.742 15:13:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.742 15:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.742 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.742 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.001 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.001 15:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.001 15:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.001 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.001 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.001 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.001 15:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.001 15:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:45.001 15:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.001 15:13:00 -- host/auth.sh@44 -- # digest=sha512 00:25:45.001 15:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.001 15:13:00 -- host/auth.sh@44 -- # keyid=4 00:25:45.001 15:13:00 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:45.001 15:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.001 15:13:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:45.001 15:13:00 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:45.001 15:13:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:25:45.001 15:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.001 15:13:00 -- host/auth.sh@68 -- # digest=sha512 00:25:45.001 15:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:45.001 15:13:00 -- host/auth.sh@68 -- # keyid=4 00:25:45.001 15:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.001 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.001 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.001 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.001 15:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.001 15:13:00 -- nvmf/common.sh@717 -- # local ip 00:25:45.001 15:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.001 15:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.001 15:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.001 15:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.001 15:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.001 15:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.001 15:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.001 15:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.001 15:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.001 15:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.001 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.001 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.271 nvme0n1 00:25:45.271 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.271 15:13:00 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:45.271 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.271 15:13:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:45.271 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.271 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.271 15:13:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.271 15:13:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.271 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.271 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.271 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.271 15:13:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.271 15:13:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.271 15:13:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:45.271 15:13:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.271 15:13:00 -- host/auth.sh@44 -- # digest=sha512 00:25:45.271 15:13:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.271 15:13:00 -- host/auth.sh@44 -- # keyid=0 00:25:45.271 15:13:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:45.271 15:13:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.271 15:13:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:45.271 15:13:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NzRmODg4OWZmZTk3Mjk0NDY0ODBlYThlZjk4MDdlOGGwpNZR: 00:25:45.271 15:13:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:25:45.271 15:13:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.271 15:13:00 -- host/auth.sh@68 -- # digest=sha512 00:25:45.271 15:13:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:45.271 15:13:00 -- host/auth.sh@68 -- # keyid=0 00:25:45.271 15:13:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.271 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.271 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.271 15:13:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.271 15:13:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.271 15:13:00 -- nvmf/common.sh@717 -- # local ip 00:25:45.271 15:13:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.271 15:13:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.271 15:13:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.271 15:13:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.271 15:13:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.271 15:13:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.271 15:13:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.271 15:13:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.271 15:13:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.271 15:13:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:45.271 15:13:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.271 15:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:45.851 nvme0n1 00:25:45.851 15:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.851 15:13:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.851 15:13:01 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:45.851 15:13:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:45.851 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.851 15:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.851 15:13:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.851 15:13:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.851 15:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.851 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.851 15:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.851 15:13:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.851 15:13:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:45.851 15:13:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.851 15:13:01 -- host/auth.sh@44 -- # digest=sha512 00:25:45.851 15:13:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.851 15:13:01 -- host/auth.sh@44 -- # keyid=1 00:25:45.851 15:13:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:45.851 15:13:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.851 15:13:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:45.851 15:13:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:45.851 15:13:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:25:45.851 15:13:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.851 15:13:01 -- host/auth.sh@68 -- # digest=sha512 00:25:45.851 15:13:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:45.851 15:13:01 -- host/auth.sh@68 -- # keyid=1 00:25:45.851 15:13:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:45.851 15:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.851 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.851 15:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.851 15:13:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.851 15:13:01 -- nvmf/common.sh@717 -- # local ip 00:25:45.851 15:13:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.851 15:13:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.851 15:13:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.851 15:13:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.851 15:13:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.851 15:13:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.851 15:13:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.851 15:13:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.851 15:13:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.851 15:13:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:45.851 15:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.851 15:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:46.418 nvme0n1 00:25:46.418 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.418 15:13:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.418 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.418 15:13:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:46.418 15:13:02 -- 
common/autotest_common.sh@10 -- # set +x 00:25:46.418 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.418 15:13:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.418 15:13:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.418 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.418 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.418 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.418 15:13:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:46.418 15:13:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:46.418 15:13:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:46.418 15:13:02 -- host/auth.sh@44 -- # digest=sha512 00:25:46.418 15:13:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.418 15:13:02 -- host/auth.sh@44 -- # keyid=2 00:25:46.418 15:13:02 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:46.418 15:13:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:46.418 15:13:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:46.418 15:13:02 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2U0Nzk4ZTc3ZDgxNDFhNGMzZjMyYzU3YjZkNTU5YmFIlf9t: 00:25:46.418 15:13:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:25:46.418 15:13:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:46.418 15:13:02 -- host/auth.sh@68 -- # digest=sha512 00:25:46.418 15:13:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:46.418 15:13:02 -- host/auth.sh@68 -- # keyid=2 00:25:46.418 15:13:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.418 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.418 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.676 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.676 15:13:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:46.676 15:13:02 -- nvmf/common.sh@717 -- # local ip 00:25:46.676 15:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:46.676 15:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:46.676 15:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.676 15:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.676 15:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:46.676 15:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.676 15:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:46.676 15:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:46.676 15:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:46.676 15:13:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:46.676 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.676 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:47.243 nvme0n1 00:25:47.243 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.243 15:13:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.243 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.243 15:13:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:47.243 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:47.243 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.243 15:13:02 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:47.243 15:13:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.243 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.243 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:47.243 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.243 15:13:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:47.243 15:13:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:47.243 15:13:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:47.243 15:13:02 -- host/auth.sh@44 -- # digest=sha512 00:25:47.243 15:13:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.243 15:13:02 -- host/auth.sh@44 -- # keyid=3 00:25:47.243 15:13:02 -- host/auth.sh@45 -- # key=DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:47.243 15:13:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:47.243 15:13:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:47.243 15:13:02 -- host/auth.sh@49 -- # echo DHHC-1:02:N2VkOTgxY2QxMWU4NDU4MjgxZjQxMWZiZjVkZDUxYTRjYjVmMWU0ZTEyZWRhZGNkznHdFQ==: 00:25:47.243 15:13:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:25:47.243 15:13:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:47.243 15:13:02 -- host/auth.sh@68 -- # digest=sha512 00:25:47.243 15:13:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:47.243 15:13:02 -- host/auth.sh@68 -- # keyid=3 00:25:47.243 15:13:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.243 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.243 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:47.243 15:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.243 15:13:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:47.243 15:13:02 -- nvmf/common.sh@717 -- # local ip 00:25:47.243 15:13:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:47.243 15:13:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:47.244 15:13:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.244 15:13:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.244 15:13:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:47.244 15:13:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.244 15:13:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:47.244 15:13:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:47.244 15:13:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:47.244 15:13:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:47.244 15:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.244 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:25:47.810 nvme0n1 00:25:47.810 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.810 15:13:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.810 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.810 15:13:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:47.810 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:47.810 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.810 15:13:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.810 15:13:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.810 
15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.810 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:47.810 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.810 15:13:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:47.810 15:13:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:47.810 15:13:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:47.810 15:13:03 -- host/auth.sh@44 -- # digest=sha512 00:25:47.810 15:13:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.810 15:13:03 -- host/auth.sh@44 -- # keyid=4 00:25:47.810 15:13:03 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:47.810 15:13:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:47.810 15:13:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:47.810 15:13:03 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQwZDEwOTY2MjUxYTZjMmY5ZmY4ZjIxNDg0ODg3NjMzNTBlYjQyYWU4YThjYmRkMWNkZjM1MWE4MWM0YTBmOLNf4yo=: 00:25:47.810 15:13:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:25:47.810 15:13:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:47.810 15:13:03 -- host/auth.sh@68 -- # digest=sha512 00:25:47.810 15:13:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:47.810 15:13:03 -- host/auth.sh@68 -- # keyid=4 00:25:47.810 15:13:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.810 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.810 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:47.810 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.810 15:13:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:47.810 15:13:03 -- nvmf/common.sh@717 -- # local ip 00:25:47.810 15:13:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:47.810 15:13:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:47.810 15:13:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.810 15:13:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.810 15:13:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:47.810 15:13:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.810 15:13:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:47.810 15:13:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:47.810 15:13:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:47.810 15:13:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.810 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.810 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 nvme0n1 00:25:48.375 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.375 15:13:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.375 15:13:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:48.375 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.375 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.375 15:13:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.375 15:13:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.375 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.375 
15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.375 15:13:03 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:48.375 15:13:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:48.375 15:13:03 -- host/auth.sh@44 -- # digest=sha256 00:25:48.375 15:13:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.375 15:13:03 -- host/auth.sh@44 -- # keyid=1 00:25:48.375 15:13:03 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:48.375 15:13:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:48.375 15:13:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:48.375 15:13:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQ2YTM5ZTcwZDJkYWU1MzQ1NDVhZWY1Mjk3ZWFhNzdjZTBhNWNhOWUzOTkyNzVm7rjidQ==: 00:25:48.375 15:13:03 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:48.375 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.375 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 15:13:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.375 15:13:03 -- host/auth.sh@119 -- # get_main_ns_ip 00:25:48.375 15:13:03 -- nvmf/common.sh@717 -- # local ip 00:25:48.375 15:13:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:48.375 15:13:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:48.375 15:13:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.375 15:13:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.375 15:13:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:48.375 15:13:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.375 15:13:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:48.375 15:13:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:48.375 15:13:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:48.375 15:13:03 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.375 15:13:03 -- common/autotest_common.sh@638 -- # local es=0 00:25:48.375 15:13:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.375 15:13:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:48.375 15:13:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:48.375 15:13:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:48.375 15:13:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:48.375 15:13:03 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:48.375 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.375 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 2024/04/18 15:13:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:48.375 request: 00:25:48.375 { 00:25:48.375 "method": 
"bdev_nvme_attach_controller", 00:25:48.375 "params": { 00:25:48.375 "name": "nvme0", 00:25:48.375 "trtype": "tcp", 00:25:48.375 "traddr": "10.0.0.1", 00:25:48.375 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:48.375 "adrfam": "ipv4", 00:25:48.375 "trsvcid": "4420", 00:25:48.375 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:25:48.375 } 00:25:48.375 } 00:25:48.375 Got JSON-RPC error response 00:25:48.375 GoRPCClient: error on JSON-RPC call 00:25:48.375 15:13:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:48.375 15:13:03 -- common/autotest_common.sh@641 -- # es=1 00:25:48.375 15:13:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:48.375 15:13:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:48.375 15:13:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:48.375 15:13:03 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.375 15:13:03 -- host/auth.sh@121 -- # jq length 00:25:48.375 15:13:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.375 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:25:48.375 15:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.375 15:13:04 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:25:48.375 15:13:04 -- host/auth.sh@124 -- # get_main_ns_ip 00:25:48.376 15:13:04 -- nvmf/common.sh@717 -- # local ip 00:25:48.376 15:13:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:48.376 15:13:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:48.376 15:13:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.376 15:13:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.376 15:13:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:48.376 15:13:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.376 15:13:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:48.376 15:13:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:48.376 15:13:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:48.376 15:13:04 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.376 15:13:04 -- common/autotest_common.sh@638 -- # local es=0 00:25:48.376 15:13:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.376 15:13:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:48.376 15:13:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:48.376 15:13:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:48.376 15:13:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:48.376 15:13:04 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:48.376 15:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.376 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.376 2024/04/18 15:13:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:48.376 
request: 00:25:48.376 { 00:25:48.376 "method": "bdev_nvme_attach_controller", 00:25:48.376 "params": { 00:25:48.376 "name": "nvme0", 00:25:48.376 "trtype": "tcp", 00:25:48.376 "traddr": "10.0.0.1", 00:25:48.376 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:48.376 "adrfam": "ipv4", 00:25:48.376 "trsvcid": "4420", 00:25:48.376 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:48.376 "dhchap_key": "key2" 00:25:48.376 } 00:25:48.376 } 00:25:48.634 Got JSON-RPC error response 00:25:48.634 GoRPCClient: error on JSON-RPC call 00:25:48.634 15:13:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:48.634 15:13:04 -- common/autotest_common.sh@641 -- # es=1 00:25:48.634 15:13:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:48.634 15:13:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:48.634 15:13:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:48.634 15:13:04 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.634 15:13:04 -- host/auth.sh@127 -- # jq length 00:25:48.634 15:13:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.634 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.634 15:13:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.634 15:13:04 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:48.634 15:13:04 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:48.634 15:13:04 -- host/auth.sh@130 -- # cleanup 00:25:48.634 15:13:04 -- host/auth.sh@24 -- # nvmftestfini 00:25:48.634 15:13:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:48.634 15:13:04 -- nvmf/common.sh@117 -- # sync 00:25:48.634 15:13:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.634 15:13:04 -- nvmf/common.sh@120 -- # set +e 00:25:48.634 15:13:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.634 15:13:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.634 rmmod nvme_tcp 00:25:48.634 rmmod nvme_fabrics 00:25:48.634 15:13:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.634 15:13:04 -- nvmf/common.sh@124 -- # set -e 00:25:48.634 15:13:04 -- nvmf/common.sh@125 -- # return 0 00:25:48.634 15:13:04 -- nvmf/common.sh@478 -- # '[' -n 83575 ']' 00:25:48.634 15:13:04 -- nvmf/common.sh@479 -- # killprocess 83575 00:25:48.634 15:13:04 -- common/autotest_common.sh@936 -- # '[' -z 83575 ']' 00:25:48.634 15:13:04 -- common/autotest_common.sh@940 -- # kill -0 83575 00:25:48.634 15:13:04 -- common/autotest_common.sh@941 -- # uname 00:25:48.634 15:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.634 15:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83575 00:25:48.634 killing process with pid 83575 00:25:48.634 15:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.634 15:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.634 15:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83575' 00:25:48.634 15:13:04 -- common/autotest_common.sh@955 -- # kill 83575 00:25:48.634 15:13:04 -- common/autotest_common.sh@960 -- # wait 83575 00:25:48.892 15:13:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:48.892 15:13:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:48.892 15:13:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:48.892 15:13:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.892 15:13:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.892 15:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.892 
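Both rejected attach attempts above use the same negative-assertion pattern: the RPC is wrapped in NOT, so the step passes only because the call fails (Code=-32602 Invalid parameters, once with no DH-CHAP key and once with the wrong key2), and the jq length check then confirms that no controller object was left behind. The real helper lives in autotest_common.sh and is only partly visible in the trace; a simplified, hypothetical stand-in for the idea:
NOT() {
    # run the wrapped command and invert its status:
    # the assertion holds only if the command failed
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
[[ $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ]]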
15:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.892 15:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.892 15:13:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:48.892 15:13:04 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:48.892 15:13:04 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:48.892 15:13:04 -- host/auth.sh@27 -- # clean_kernel_target 00:25:48.892 15:13:04 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:48.892 15:13:04 -- nvmf/common.sh@675 -- # echo 0 00:25:48.892 15:13:04 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.892 15:13:04 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:48.892 15:13:04 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:48.892 15:13:04 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:48.892 15:13:04 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:48.892 15:13:04 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:48.892 15:13:04 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:49.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:50.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:50.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:50.086 15:13:05 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.JTH /tmp/spdk.key-null.iGK /tmp/spdk.key-sha256.ktP /tmp/spdk.key-sha384.uaX /tmp/spdk.key-sha512.Hqn /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:25:50.086 15:13:05 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:50.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:50.654 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:50.654 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:50.654 ************************************ 00:25:50.654 END TEST nvmf_auth 00:25:50.654 ************************************ 00:25:50.654 00:25:50.654 real 0m36.750s 00:25:50.654 user 0m33.602s 00:25:50.654 sys 0m5.203s 00:25:50.654 15:13:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:50.654 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:25:50.912 15:13:06 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:50.912 15:13:06 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:50.912 15:13:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:50.912 15:13:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.912 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:25:50.912 ************************************ 00:25:50.912 START TEST nvmf_digest 00:25:50.912 ************************************ 00:25:50.912 15:13:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:50.912 * Looking for test storage... 
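The cleanup earlier in this block tears the kernel nvmet target down in dependency order before the digest suite starts: the allowed-host link and host entry first, then the port-to-subsystem link, the namespace, the port and subsystem directories, and finally the nvmet modules (configfs directories can only be removed once the links and children inside them are gone). Condensed from the trace, with only the $cfg shorthand added; the destination of the 'echo 0' disable step is not visible above and is omitted:
cfg=/sys/kernel/config/nvmet
rm    "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/nqn.2024-02.io.spdk:cnode0"
modprobe -r nvmet_tcp nvmet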
00:25:51.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:51.171 15:13:06 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:51.171 15:13:06 -- nvmf/common.sh@7 -- # uname -s 00:25:51.171 15:13:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.171 15:13:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.171 15:13:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.171 15:13:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.171 15:13:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.171 15:13:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.171 15:13:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.171 15:13:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.171 15:13:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.171 15:13:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.171 15:13:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:51.171 15:13:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:25:51.171 15:13:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.171 15:13:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.171 15:13:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:51.171 15:13:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.171 15:13:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.171 15:13:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.171 15:13:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.171 15:13:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.171 15:13:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.171 15:13:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.172 15:13:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.172 15:13:06 -- paths/export.sh@5 -- # export PATH 00:25:51.172 15:13:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.172 15:13:06 -- nvmf/common.sh@47 -- # : 0 00:25:51.172 15:13:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.172 15:13:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.172 15:13:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.172 15:13:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.172 15:13:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.172 15:13:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.172 15:13:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.172 15:13:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.172 15:13:06 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:51.172 15:13:06 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:51.172 15:13:06 -- host/digest.sh@16 -- # runtime=2 00:25:51.172 15:13:06 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:51.172 15:13:06 -- host/digest.sh@138 -- # nvmftestinit 00:25:51.172 15:13:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:51.172 15:13:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.172 15:13:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:51.172 15:13:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:51.172 15:13:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:51.172 15:13:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.172 15:13:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.172 15:13:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.172 15:13:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:51.172 15:13:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:51.172 15:13:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:51.172 15:13:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:51.172 15:13:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:51.172 15:13:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:51.172 15:13:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.172 15:13:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.172 15:13:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:51.172 15:13:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:51.172 15:13:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:25:51.172 15:13:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:51.172 15:13:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:51.172 15:13:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.172 15:13:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:51.172 15:13:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:51.172 15:13:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:51.172 15:13:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:51.172 15:13:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:51.172 15:13:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:51.172 Cannot find device "nvmf_tgt_br" 00:25:51.172 15:13:06 -- nvmf/common.sh@155 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:51.172 Cannot find device "nvmf_tgt_br2" 00:25:51.172 15:13:06 -- nvmf/common.sh@156 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:51.172 15:13:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:51.172 Cannot find device "nvmf_tgt_br" 00:25:51.172 15:13:06 -- nvmf/common.sh@158 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:51.172 Cannot find device "nvmf_tgt_br2" 00:25:51.172 15:13:06 -- nvmf/common.sh@159 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:51.172 15:13:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:51.172 15:13:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:51.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.172 15:13:06 -- nvmf/common.sh@162 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:51.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.172 15:13:06 -- nvmf/common.sh@163 -- # true 00:25:51.172 15:13:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:51.172 15:13:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:51.475 15:13:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:51.475 15:13:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:51.475 15:13:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:51.475 15:13:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:51.475 15:13:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:51.475 15:13:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:51.475 15:13:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:51.475 15:13:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:51.475 15:13:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:51.475 15:13:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:51.475 15:13:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:51.475 15:13:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:51.475 15:13:06 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:51.475 15:13:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:51.475 15:13:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:51.475 15:13:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:51.475 15:13:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:51.475 15:13:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:51.475 15:13:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:51.475 15:13:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:51.475 15:13:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:51.475 15:13:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:51.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:25:51.475 00:25:51.475 --- 10.0.0.2 ping statistics --- 00:25:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.475 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:51.475 15:13:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:51.475 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:51.475 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:25:51.475 00:25:51.475 --- 10.0.0.3 ping statistics --- 00:25:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.475 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:51.475 15:13:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:51.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:25:51.475 00:25:51.475 --- 10.0.0.1 ping statistics --- 00:25:51.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.475 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:51.475 15:13:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.475 15:13:07 -- nvmf/common.sh@422 -- # return 0 00:25:51.475 15:13:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:51.475 15:13:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.475 15:13:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:51.475 15:13:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:51.475 15:13:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.475 15:13:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:51.475 15:13:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:51.475 15:13:07 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:51.475 15:13:07 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:51.475 15:13:07 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:51.475 15:13:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:51.475 15:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:51.475 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.734 ************************************ 00:25:51.734 START TEST nvmf_digest_clean 00:25:51.734 ************************************ 00:25:51.734 15:13:07 -- common/autotest_common.sh@1111 -- # run_digest 00:25:51.734 15:13:07 -- host/digest.sh@120 -- # local dsa_initiator 00:25:51.734 15:13:07 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:51.734 15:13:07 -- host/digest.sh@121 -- # dsa_initiator=false 
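nvmf_veth_init above builds the whole test topology: a network namespace for the target, veth pairs for the initiator and two target interfaces, a bridge tying the peer ends together, an iptables accept rule for the NVMe/TCP port, and pings proving that 10.0.0.1 (initiator side) reaches 10.0.0.2 and 10.0.0.3 (target side, inside the namespace). The same setup condensed to the commands shown in the trace:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator to target reachability check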
00:25:51.734 15:13:07 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:51.734 15:13:07 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:51.734 15:13:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:51.734 15:13:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:51.734 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.734 15:13:07 -- nvmf/common.sh@470 -- # nvmfpid=85191 00:25:51.734 15:13:07 -- nvmf/common.sh@471 -- # waitforlisten 85191 00:25:51.734 15:13:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:51.734 15:13:07 -- common/autotest_common.sh@817 -- # '[' -z 85191 ']' 00:25:51.734 15:13:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.734 15:13:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:51.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.734 15:13:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.734 15:13:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:51.734 15:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:51.734 [2024-04-18 15:13:07.296317] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:25:51.734 [2024-04-18 15:13:07.296408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.734 [2024-04-18 15:13:07.427658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.992 [2024-04-18 15:13:07.545508] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.992 [2024-04-18 15:13:07.545603] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.992 [2024-04-18 15:13:07.545631] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.992 [2024-04-18 15:13:07.545641] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.992 [2024-04-18 15:13:07.545649] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.992 [2024-04-18 15:13:07.545684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.560 15:13:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.560 15:13:08 -- common/autotest_common.sh@850 -- # return 0 00:25:52.560 15:13:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:52.560 15:13:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:52.560 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 15:13:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.819 15:13:08 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:52.819 15:13:08 -- host/digest.sh@126 -- # common_target_config 00:25:52.819 15:13:08 -- host/digest.sh@43 -- # rpc_cmd 00:25:52.819 15:13:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.819 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 null0 00:25:52.819 [2024-04-18 15:13:08.378800] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.819 [2024-04-18 15:13:08.402918] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.819 15:13:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.819 15:13:08 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:52.819 15:13:08 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:52.819 15:13:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:52.819 15:13:08 -- host/digest.sh@80 -- # rw=randread 00:25:52.819 15:13:08 -- host/digest.sh@80 -- # bs=4096 00:25:52.819 15:13:08 -- host/digest.sh@80 -- # qd=128 00:25:52.819 15:13:08 -- host/digest.sh@80 -- # scan_dsa=false 00:25:52.819 15:13:08 -- host/digest.sh@83 -- # bperfpid=85242 00:25:52.819 15:13:08 -- host/digest.sh@84 -- # waitforlisten 85242 /var/tmp/bperf.sock 00:25:52.819 15:13:08 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:52.819 15:13:08 -- common/autotest_common.sh@817 -- # '[' -z 85242 ']' 00:25:52.819 15:13:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.819 15:13:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:52.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.819 15:13:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.819 15:13:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:52.819 15:13:08 -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 [2024-04-18 15:13:08.461289] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:25:52.819 [2024-04-18 15:13:08.461373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85242 ] 00:25:53.079 [2024-04-18 15:13:08.605662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.079 [2024-04-18 15:13:08.711148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.017 15:13:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:54.017 15:13:09 -- common/autotest_common.sh@850 -- # return 0 00:25:54.017 15:13:09 -- host/digest.sh@86 -- # false 00:25:54.017 15:13:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:54.017 15:13:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:54.276 15:13:09 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.276 15:13:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:54.535 nvme0n1 00:25:54.535 15:13:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:54.535 15:13:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:54.535 Running I/O for 2 seconds... 00:25:57.073 00:25:57.073 Latency(us) 00:25:57.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.073 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:57.073 nvme0n1 : 2.01 21566.57 84.24 0.00 0.00 5926.87 3013.60 14844.30 00:25:57.073 =================================================================================================================== 00:25:57.073 Total : 21566.57 84.24 0.00 0.00 5926.87 3013.60 14844.30 00:25:57.073 0 00:25:57.073 15:13:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:57.073 15:13:12 -- host/digest.sh@93 -- # get_accel_stats 00:25:57.073 15:13:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:57.073 15:13:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:57.073 15:13:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:57.073 | select(.opcode=="crc32c") 00:25:57.073 | "\(.module_name) \(.executed)"' 00:25:57.073 15:13:12 -- host/digest.sh@94 -- # false 00:25:57.073 15:13:12 -- host/digest.sh@94 -- # exp_module=software 00:25:57.073 15:13:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:57.073 15:13:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:57.073 15:13:12 -- host/digest.sh@98 -- # killprocess 85242 00:25:57.073 15:13:12 -- common/autotest_common.sh@936 -- # '[' -z 85242 ']' 00:25:57.073 15:13:12 -- common/autotest_common.sh@940 -- # kill -0 85242 00:25:57.073 15:13:12 -- common/autotest_common.sh@941 -- # uname 00:25:57.073 15:13:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:57.073 15:13:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85242 00:25:57.073 15:13:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:57.073 15:13:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:57.073 15:13:12 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 85242' 00:25:57.073 killing process with pid 85242 00:25:57.073 15:13:12 -- common/autotest_common.sh@955 -- # kill 85242 00:25:57.073 Received shutdown signal, test time was about 2.000000 seconds 00:25:57.073 00:25:57.073 Latency(us) 00:25:57.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:57.073 =================================================================================================================== 00:25:57.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:57.073 15:13:12 -- common/autotest_common.sh@960 -- # wait 85242 00:25:57.073 15:13:12 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:57.073 15:13:12 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:57.073 15:13:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:57.073 15:13:12 -- host/digest.sh@80 -- # rw=randread 00:25:57.073 15:13:12 -- host/digest.sh@80 -- # bs=131072 00:25:57.073 15:13:12 -- host/digest.sh@80 -- # qd=16 00:25:57.073 15:13:12 -- host/digest.sh@80 -- # scan_dsa=false 00:25:57.073 15:13:12 -- host/digest.sh@83 -- # bperfpid=85328 00:25:57.073 15:13:12 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:57.073 15:13:12 -- host/digest.sh@84 -- # waitforlisten 85328 /var/tmp/bperf.sock 00:25:57.073 15:13:12 -- common/autotest_common.sh@817 -- # '[' -z 85328 ']' 00:25:57.073 15:13:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.073 15:13:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:57.073 15:13:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:57.073 15:13:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:57.073 15:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:57.073 [2024-04-18 15:13:12.734602] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:25:57.073 [2024-04-18 15:13:12.734700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85328 ] 00:25:57.073 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:57.073 Zero copy mechanism will not be used. 
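Each run_bperf iteration in this suite follows the same cycle seen above for the 4 KiB randread case: start bdevperf on its own RPC socket with --wait-for-rpc, finish framework init, attach to the SPDK target with data digest enabled (--ddgst), drive I/O for two seconds through bdevperf.py, read the accel stats to see which module computed the CRC32C digests, then kill the bperf process. A condensed sketch using the paths and flags from the log (the shell variables and the backgrounding are the only additions; the real script waits for the socket to appear before issuing RPCs):
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock
$spdk/build/examples/bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
$spdk/scripts/rpc.py -s $sock framework_start_init
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
$spdk/scripts/rpc.py -s $sock accel_get_stats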
00:25:57.331 [2024-04-18 15:13:12.881150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.331 [2024-04-18 15:13:13.001499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.268 15:13:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:58.268 15:13:13 -- common/autotest_common.sh@850 -- # return 0 00:25:58.268 15:13:13 -- host/digest.sh@86 -- # false 00:25:58.268 15:13:13 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:58.268 15:13:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:58.528 15:13:14 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.528 15:13:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.788 nvme0n1 00:25:58.788 15:13:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:58.788 15:13:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:58.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:58.788 Zero copy mechanism will not be used. 00:25:58.788 Running I/O for 2 seconds... 00:26:00.710 00:26:00.710 Latency(us) 00:26:00.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.711 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:00.711 nvme0n1 : 2.00 7673.19 959.15 0.00 0.00 2081.91 575.74 7211.59 00:26:00.711 =================================================================================================================== 00:26:00.711 Total : 7673.19 959.15 0.00 0.00 2081.91 575.74 7211.59 00:26:00.711 0 00:26:00.711 15:13:16 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:00.711 15:13:16 -- host/digest.sh@93 -- # get_accel_stats 00:26:00.711 15:13:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:00.711 15:13:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:00.711 15:13:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:00.711 | select(.opcode=="crc32c") 00:26:00.711 | "\(.module_name) \(.executed)"' 00:26:00.970 15:13:16 -- host/digest.sh@94 -- # false 00:26:00.970 15:13:16 -- host/digest.sh@94 -- # exp_module=software 00:26:00.970 15:13:16 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:00.970 15:13:16 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:00.970 15:13:16 -- host/digest.sh@98 -- # killprocess 85328 00:26:00.970 15:13:16 -- common/autotest_common.sh@936 -- # '[' -z 85328 ']' 00:26:00.970 15:13:16 -- common/autotest_common.sh@940 -- # kill -0 85328 00:26:00.970 15:13:16 -- common/autotest_common.sh@941 -- # uname 00:26:00.970 15:13:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:00.971 15:13:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85328 00:26:00.971 15:13:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:00.971 killing process with pid 85328 00:26:00.971 15:13:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:00.971 15:13:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85328' 00:26:00.971 15:13:16 -- common/autotest_common.sh@955 -- # kill 85328 
00:26:00.971 Received shutdown signal, test time was about 2.000000 seconds 00:26:00.971 00:26:00.971 Latency(us) 00:26:00.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.971 =================================================================================================================== 00:26:00.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.971 15:13:16 -- common/autotest_common.sh@960 -- # wait 85328 00:26:01.230 15:13:16 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:01.230 15:13:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:01.230 15:13:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.230 15:13:16 -- host/digest.sh@80 -- # rw=randwrite 00:26:01.230 15:13:16 -- host/digest.sh@80 -- # bs=4096 00:26:01.230 15:13:16 -- host/digest.sh@80 -- # qd=128 00:26:01.230 15:13:16 -- host/digest.sh@80 -- # scan_dsa=false 00:26:01.230 15:13:16 -- host/digest.sh@83 -- # bperfpid=85418 00:26:01.230 15:13:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:01.230 15:13:16 -- host/digest.sh@84 -- # waitforlisten 85418 /var/tmp/bperf.sock 00:26:01.230 15:13:16 -- common/autotest_common.sh@817 -- # '[' -z 85418 ']' 00:26:01.230 15:13:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:01.230 15:13:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:01.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.230 15:13:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.230 15:13:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:01.230 15:13:16 -- common/autotest_common.sh@10 -- # set +x 00:26:01.489 [2024-04-18 15:13:16.945248] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:26:01.489 [2024-04-18 15:13:16.945371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85418 ] 00:26:01.489 [2024-04-18 15:13:17.097025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.748 [2024-04-18 15:13:17.200230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.317 15:13:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:02.317 15:13:17 -- common/autotest_common.sh@850 -- # return 0 00:26:02.317 15:13:17 -- host/digest.sh@86 -- # false 00:26:02.317 15:13:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:02.317 15:13:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:02.576 15:13:18 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.577 15:13:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.836 nvme0n1 00:26:02.836 15:13:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:02.836 15:13:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.095 Running I/O for 2 seconds... 00:26:05.002 00:26:05.002 Latency(us) 00:26:05.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.002 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:05.002 nvme0n1 : 2.00 25782.61 100.71 0.00 0.00 4959.82 2592.49 13159.84 00:26:05.002 =================================================================================================================== 00:26:05.002 Total : 25782.61 100.71 0.00 0.00 4959.82 2592.49 13159.84 00:26:05.002 0 00:26:05.002 15:13:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:05.002 15:13:20 -- host/digest.sh@93 -- # get_accel_stats 00:26:05.002 15:13:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:05.002 15:13:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:05.002 15:13:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:05.002 | select(.opcode=="crc32c") 00:26:05.002 | "\(.module_name) \(.executed)"' 00:26:05.261 15:13:20 -- host/digest.sh@94 -- # false 00:26:05.261 15:13:20 -- host/digest.sh@94 -- # exp_module=software 00:26:05.261 15:13:20 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:05.261 15:13:20 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:05.261 15:13:20 -- host/digest.sh@98 -- # killprocess 85418 00:26:05.261 15:13:20 -- common/autotest_common.sh@936 -- # '[' -z 85418 ']' 00:26:05.261 15:13:20 -- common/autotest_common.sh@940 -- # kill -0 85418 00:26:05.261 15:13:20 -- common/autotest_common.sh@941 -- # uname 00:26:05.261 15:13:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.261 15:13:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85418 00:26:05.261 15:13:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:05.261 15:13:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:05.261 15:13:20 -- common/autotest_common.sh@954 -- 
# echo 'killing process with pid 85418' 00:26:05.261 killing process with pid 85418 00:26:05.261 15:13:20 -- common/autotest_common.sh@955 -- # kill 85418 00:26:05.261 Received shutdown signal, test time was about 2.000000 seconds 00:26:05.261 00:26:05.261 Latency(us) 00:26:05.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.261 =================================================================================================================== 00:26:05.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.261 15:13:20 -- common/autotest_common.sh@960 -- # wait 85418 00:26:05.520 15:13:21 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:05.520 15:13:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:05.520 15:13:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.520 15:13:21 -- host/digest.sh@80 -- # rw=randwrite 00:26:05.520 15:13:21 -- host/digest.sh@80 -- # bs=131072 00:26:05.520 15:13:21 -- host/digest.sh@80 -- # qd=16 00:26:05.520 15:13:21 -- host/digest.sh@80 -- # scan_dsa=false 00:26:05.520 15:13:21 -- host/digest.sh@83 -- # bperfpid=85505 00:26:05.520 15:13:21 -- host/digest.sh@84 -- # waitforlisten 85505 /var/tmp/bperf.sock 00:26:05.520 15:13:21 -- common/autotest_common.sh@817 -- # '[' -z 85505 ']' 00:26:05.520 15:13:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.520 15:13:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.520 15:13:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:05.520 15:13:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.520 15:13:21 -- common/autotest_common.sh@10 -- # set +x 00:26:05.520 15:13:21 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:05.520 [2024-04-18 15:13:21.168620] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:05.520 [2024-04-18 15:13:21.168706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85505 ] 00:26:05.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:05.520 Zero copy mechanism will not be used. 
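The pass/fail decision after every one of these runs is the accel-stats check repeated in the trace: with DSA disabled (scan_dsa=false) the expected module is software, so the test extracts the crc32c entry from accel_get_stats and requires a non-zero executed count from that module. The jq filter below is the one used in the log; only the surrounding read and test glue is restated here:
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 )) && [[ $acc_module == software ]]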
00:26:05.779 [2024-04-18 15:13:21.308311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.779 [2024-04-18 15:13:21.412940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.715 15:13:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:06.715 15:13:22 -- common/autotest_common.sh@850 -- # return 0 00:26:06.715 15:13:22 -- host/digest.sh@86 -- # false 00:26:06.715 15:13:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.715 15:13:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.715 15:13:22 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.715 15:13:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.974 nvme0n1 00:26:07.233 15:13:22 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.233 15:13:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:07.233 Zero copy mechanism will not be used. 00:26:07.233 Running I/O for 2 seconds... 00:26:09.138 00:26:09.138 Latency(us) 00:26:09.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.138 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:09.138 nvme0n1 : 2.00 8158.38 1019.80 0.00 0.00 1957.57 1427.84 11633.30 00:26:09.138 =================================================================================================================== 00:26:09.138 Total : 8158.38 1019.80 0.00 0.00 1957.57 1427.84 11633.30 00:26:09.138 0 00:26:09.138 15:13:24 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.138 15:13:24 -- host/digest.sh@93 -- # get_accel_stats 00:26:09.138 15:13:24 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.138 15:13:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.138 15:13:24 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.138 | select(.opcode=="crc32c") 00:26:09.138 | "\(.module_name) \(.executed)"' 00:26:09.397 15:13:25 -- host/digest.sh@94 -- # false 00:26:09.397 15:13:25 -- host/digest.sh@94 -- # exp_module=software 00:26:09.397 15:13:25 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.397 15:13:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.397 15:13:25 -- host/digest.sh@98 -- # killprocess 85505 00:26:09.397 15:13:25 -- common/autotest_common.sh@936 -- # '[' -z 85505 ']' 00:26:09.397 15:13:25 -- common/autotest_common.sh@940 -- # kill -0 85505 00:26:09.397 15:13:25 -- common/autotest_common.sh@941 -- # uname 00:26:09.397 15:13:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:09.397 15:13:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85505 00:26:09.656 15:13:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:09.656 15:13:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:09.656 killing process with pid 85505 00:26:09.656 15:13:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85505' 00:26:09.656 15:13:25 -- common/autotest_common.sh@955 -- # kill 85505 
00:26:09.656 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.656 00:26:09.656 Latency(us) 00:26:09.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.656 =================================================================================================================== 00:26:09.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.656 15:13:25 -- common/autotest_common.sh@960 -- # wait 85505 00:26:09.656 15:13:25 -- host/digest.sh@132 -- # killprocess 85191 00:26:09.657 15:13:25 -- common/autotest_common.sh@936 -- # '[' -z 85191 ']' 00:26:09.657 15:13:25 -- common/autotest_common.sh@940 -- # kill -0 85191 00:26:09.657 15:13:25 -- common/autotest_common.sh@941 -- # uname 00:26:09.916 15:13:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:09.916 15:13:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85191 00:26:09.916 15:13:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:09.916 15:13:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:09.916 15:13:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85191' 00:26:09.916 killing process with pid 85191 00:26:09.916 15:13:25 -- common/autotest_common.sh@955 -- # kill 85191 00:26:09.916 15:13:25 -- common/autotest_common.sh@960 -- # wait 85191 00:26:09.916 00:26:09.916 real 0m18.391s 00:26:09.916 user 0m34.182s 00:26:09.916 sys 0m5.212s 00:26:09.916 15:13:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:09.916 ************************************ 00:26:09.916 END TEST nvmf_digest_clean 00:26:09.916 ************************************ 00:26:09.916 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.175 15:13:25 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:10.175 15:13:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:10.175 15:13:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.175 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.175 ************************************ 00:26:10.175 START TEST nvmf_digest_error 00:26:10.175 ************************************ 00:26:10.175 15:13:25 -- common/autotest_common.sh@1111 -- # run_digest_error 00:26:10.175 15:13:25 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:10.175 15:13:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:10.175 15:13:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:10.175 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.175 15:13:25 -- nvmf/common.sh@470 -- # nvmfpid=85630 00:26:10.175 15:13:25 -- nvmf/common.sh@471 -- # waitforlisten 85630 00:26:10.175 15:13:25 -- common/autotest_common.sh@817 -- # '[' -z 85630 ']' 00:26:10.176 15:13:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.176 15:13:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:10.176 15:13:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:10.176 15:13:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:10.176 15:13:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:10.176 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.176 [2024-04-18 15:13:25.858192] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:10.176 [2024-04-18 15:13:25.858280] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.437 [2024-04-18 15:13:26.003754] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.726 [2024-04-18 15:13:26.158066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.726 [2024-04-18 15:13:26.158146] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.726 [2024-04-18 15:13:26.158156] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.726 [2024-04-18 15:13:26.158166] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.726 [2024-04-18 15:13:26.158174] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.726 [2024-04-18 15:13:26.158220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.295 15:13:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:11.295 15:13:26 -- common/autotest_common.sh@850 -- # return 0 00:26:11.295 15:13:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:11.295 15:13:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:11.295 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 15:13:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.295 15:13:26 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:11.295 15:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.295 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 [2024-04-18 15:13:26.841700] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:11.295 15:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.295 15:13:26 -- host/digest.sh@105 -- # common_target_config 00:26:11.295 15:13:26 -- host/digest.sh@43 -- # rpc_cmd 00:26:11.295 15:13:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.295 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 null0 00:26:11.295 [2024-04-18 15:13:26.943766] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.295 [2024-04-18 15:13:26.967900] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.295 15:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.295 15:13:26 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:11.295 15:13:26 -- host/digest.sh@54 -- # local rw bs qd 00:26:11.295 15:13:26 -- host/digest.sh@56 -- # rw=randread 00:26:11.295 15:13:26 -- host/digest.sh@56 -- # bs=4096 00:26:11.295 15:13:26 -- host/digest.sh@56 -- # qd=128 00:26:11.295 15:13:26 -- host/digest.sh@58 -- # bperfpid=85675 00:26:11.295 15:13:26 -- host/digest.sh@60 -- # waitforlisten 85675 
/var/tmp/bperf.sock 00:26:11.295 15:13:26 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:11.295 15:13:26 -- common/autotest_common.sh@817 -- # '[' -z 85675 ']' 00:26:11.295 15:13:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.295 15:13:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:11.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:11.295 15:13:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.295 15:13:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:11.295 15:13:26 -- common/autotest_common.sh@10 -- # set +x 00:26:11.554 [2024-04-18 15:13:27.030580] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:11.554 [2024-04-18 15:13:27.030667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85675 ] 00:26:11.554 [2024-04-18 15:13:27.175224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.813 [2024-04-18 15:13:27.278018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.381 15:13:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:12.381 15:13:27 -- common/autotest_common.sh@850 -- # return 0 00:26:12.381 15:13:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.381 15:13:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:12.641 15:13:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:12.641 15:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.641 15:13:28 -- common/autotest_common.sh@10 -- # set +x 00:26:12.641 15:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.641 15:13:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.641 15:13:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.900 nvme0n1 00:26:12.900 15:13:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:12.900 15:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.900 15:13:28 -- common/autotest_common.sh@10 -- # set +x 00:26:12.900 15:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.900 15:13:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:12.900 15:13:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:13.161 Running I/O for 2 seconds... 
00:26:13.161 [2024-04-18 15:13:28.656666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.161 [2024-04-18 15:13:28.656745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.161 [2024-04-18 15:13:28.656759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.161 [2024-04-18 15:13:28.668503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.161 [2024-04-18 15:13:28.668584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.161 [2024-04-18 15:13:28.668597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.161 [2024-04-18 15:13:28.680137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.161 [2024-04-18 15:13:28.680201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.161 [2024-04-18 15:13:28.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.161 [2024-04-18 15:13:28.692500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.161 [2024-04-18 15:13:28.692582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.161 [2024-04-18 15:13:28.692597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.161 [2024-04-18 15:13:28.703976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.161 [2024-04-18 15:13:28.704043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.161 [2024-04-18 15:13:28.704056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.161 [2024-04-18 15:13:28.715967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.716033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.716049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.727009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.727068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.727082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.737147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.737214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.737228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.748827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.748891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.748906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.759875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.759946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.759960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.771279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.771346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.771361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.783806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.783876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.783891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.794910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.794979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.794993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.806174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.806247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.806271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.817461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.817529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.817560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.826019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.826081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.826097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.838812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.838889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.838906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.851186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.851252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.851266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.162 [2024-04-18 15:13:28.862685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.162 [2024-04-18 15:13:28.862747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.162 [2024-04-18 15:13:28.862762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.874861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.874928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.874942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.884379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.884439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:13.432 [2024-04-18 15:13:28.884451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.896764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.896839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.896859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.909310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.909377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.909390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.921625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.921706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.921720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.933357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.933430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.933444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.945034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.945102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.945116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.956277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.956345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.956358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.967238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.967304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:17648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.967319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.979780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.979849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.979863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:28.991049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:28.991113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:28.991128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.002483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.002579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.002594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.014726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.014795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.014808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.026760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.026832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.026847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.037305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.037385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.047015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.047080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.047094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.061077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.061145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.061160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.072741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.072804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.072818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.082248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.082309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.082323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.094364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.094440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.094454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.107379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.107449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.107464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.120444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.432 [2024-04-18 15:13:29.120511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.120527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.432 [2024-04-18 15:13:29.129966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 
00:26:13.432 [2024-04-18 15:13:29.130033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.432 [2024-04-18 15:13:29.130047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.142775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.142843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.142858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.153325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.153393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.153407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.164369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.164432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.164445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.176389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.176460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.176474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.187213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.187282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.187296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.198891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.198962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.198976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.209694] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.209757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.209771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.220402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.220473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.220487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.232475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.232560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.232574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.242879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.242947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.242961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.255600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.255668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.255682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.267842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.267910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.267925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.278754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.278818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.278832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:13.691 [2024-04-18 15:13:29.289918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.290000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.290014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.301911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.301975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.301988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.313918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.313984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.313999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.326329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.326393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.326406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.336477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.691 [2024-04-18 15:13:29.336551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.691 [2024-04-18 15:13:29.336566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.691 [2024-04-18 15:13:29.350002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.692 [2024-04-18 15:13:29.350075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.692 [2024-04-18 15:13:29.350091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.692 [2024-04-18 15:13:29.361721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.692 [2024-04-18 15:13:29.361789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.692 [2024-04-18 15:13:29.361802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.692 [2024-04-18 15:13:29.373852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.692 [2024-04-18 15:13:29.373927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.692 [2024-04-18 15:13:29.373941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.692 [2024-04-18 15:13:29.383806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.692 [2024-04-18 15:13:29.383879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.692 [2024-04-18 15:13:29.383892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.692 [2024-04-18 15:13:29.395983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.692 [2024-04-18 15:13:29.396045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.692 [2024-04-18 15:13:29.396060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.407520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.407588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.407602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.417725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.417780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.417793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.428154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.428211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.428224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.440256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.440319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.440332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.451636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.451698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.451712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.461611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.461670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.461683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.472663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.472720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.472734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.483381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.483442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.483455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.493397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.493455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.493468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.505223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.505277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.505290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.516892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.516956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:13.951 [2024-04-18 15:13:29.516968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.528206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.528260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.528273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.538093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.538145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.538159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.549470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.549521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.549535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.562578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.562637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.562651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.572308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.572366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.572380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.582381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.582436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.582449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.595197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.595259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:23700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.595273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.606107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.606165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.606178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.617752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.617809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.617823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.627018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.627072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.627085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.637940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.637993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.638006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.951 [2024-04-18 15:13:29.648902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:13.951 [2024-04-18 15:13:29.648956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:13.951 [2024-04-18 15:13:29.648969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.659184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.659242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.211 [2024-04-18 15:13:29.659255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.671033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.671088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.211 [2024-04-18 15:13:29.671100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.682128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.682185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.211 [2024-04-18 15:13:29.682198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.692259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.692310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.211 [2024-04-18 15:13:29.692323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.702549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.702600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.211 [2024-04-18 15:13:29.702614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.211 [2024-04-18 15:13:29.714053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.211 [2024-04-18 15:13:29.714120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.714135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.724668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.724725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.724739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.736429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.736486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.736501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.747503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 
00:26:14.212 [2024-04-18 15:13:29.747570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.747583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.758718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.758797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.758818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.766756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.766810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.766823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.779459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.779516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.779528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.789214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.789268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.789281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.801392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.801460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.810884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.810942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.810955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.821447] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.821501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.821513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.831321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.831374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.831388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.842696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.842749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.842761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.853697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.853749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.853764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.863788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.863844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.863856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.875171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.875231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.875244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.884323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.884377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.884390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.895531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.895597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.895610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.906033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.906090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.906104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.212 [2024-04-18 15:13:29.916619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.212 [2024-04-18 15:13:29.916679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.212 [2024-04-18 15:13:29.916691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.927358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.927423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.927436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.936450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.936512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.936524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.948299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.948360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.948374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.957577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.957633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.957646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.967420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.967477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.967492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.979729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.979783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.979797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.990850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.990905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.990918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:29.998810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:29.998861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:29.998874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.011764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.011842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.011861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.023239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.023321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.023342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.036344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.036431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.036450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.049519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.049603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.049618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.060860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.060927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.060940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.071303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.071371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.071386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.083666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.083731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.083745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.093720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.093785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.093799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.104792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.104855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.472 [2024-04-18 15:13:30.104870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.472 [2024-04-18 15:13:30.113629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.472 [2024-04-18 15:13:30.113692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:14.472 [2024-04-18 15:13:30.113706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-04-18 15:13:30.124410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.473 [2024-04-18 15:13:30.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-04-18 15:13:30.124491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-04-18 15:13:30.135810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.473 [2024-04-18 15:13:30.135867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-04-18 15:13:30.135879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-04-18 15:13:30.147213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.473 [2024-04-18 15:13:30.147271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-04-18 15:13:30.147284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-04-18 15:13:30.158532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.473 [2024-04-18 15:13:30.158602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-04-18 15:13:30.158615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.473 [2024-04-18 15:13:30.167667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.473 [2024-04-18 15:13:30.167725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.473 [2024-04-18 15:13:30.167738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.732 [2024-04-18 15:13:30.178960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.179020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.179033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.190193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.190254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:15053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.190269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.200960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.201019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.201032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.211963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.212024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.212038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.222655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.222718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.222732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.232726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.232780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.232793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.242535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.242606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.242618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.254276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.254355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.265098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.265154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.265167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.276592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.276652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.276665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.286718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.286778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.286791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.296147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.296206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.296219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.307145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.307206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.307220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.317039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.317101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.317114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.328157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.328223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.328238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.339716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 
00:26:14.733 [2024-04-18 15:13:30.339778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.339790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.351955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.352019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.352032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.361666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.361721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.361733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.373009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.373069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.373081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.384881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.384940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.384953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.395584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.395642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.395655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.404634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.404682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.404694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.415937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.415985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.427048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.427100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.427114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.733 [2024-04-18 15:13:30.438366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.733 [2024-04-18 15:13:30.438415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.733 [2024-04-18 15:13:30.438428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.992 [2024-04-18 15:13:30.449245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.992 [2024-04-18 15:13:30.449304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.992 [2024-04-18 15:13:30.449318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.992 [2024-04-18 15:13:30.459290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.992 [2024-04-18 15:13:30.459355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.992 [2024-04-18 15:13:30.459369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.992 [2024-04-18 15:13:30.471023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.992 [2024-04-18 15:13:30.471086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.471099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.481972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.482032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.482044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.491597] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.491653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.491666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.503778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.503834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.503847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.514389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.514456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.523631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.523695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.523708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.535174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.535235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.535247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.546393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.546451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.546465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.557171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.557221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.557234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.568449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.568502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.579317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.579375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.579389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.589675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.589737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.589750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.601088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.601149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.601163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.611751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.611809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.611822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.621462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.621528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.621558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 [2024-04-18 15:13:30.631860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1259b40) 00:26:14.993 [2024-04-18 15:13:30.631925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.993 [2024-04-18 15:13:30.631937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.993 00:26:14.993 Latency(us) 00:26:14.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.993 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:14.993 nvme0n1 : 2.00 22821.97 89.15 0.00 0.00 5603.20 2592.49 17055.15 00:26:14.993 =================================================================================================================== 00:26:14.993 Total : 22821.97 89.15 0.00 0.00 5603.20 2592.49 17055.15 00:26:14.993 0 00:26:14.993 15:13:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:14.993 15:13:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:14.993 15:13:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:14.993 15:13:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:14.993 | .driver_specific 00:26:14.993 | .nvme_error 00:26:14.993 | .status_code 00:26:14.993 | .command_transient_transport_error' 00:26:15.252 15:13:30 -- host/digest.sh@71 -- # (( 179 > 0 )) 00:26:15.252 15:13:30 -- host/digest.sh@73 -- # killprocess 85675 00:26:15.252 15:13:30 -- common/autotest_common.sh@936 -- # '[' -z 85675 ']' 00:26:15.252 15:13:30 -- common/autotest_common.sh@940 -- # kill -0 85675 00:26:15.252 15:13:30 -- common/autotest_common.sh@941 -- # uname 00:26:15.252 15:13:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:15.252 15:13:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85675 00:26:15.252 killing process with pid 85675 00:26:15.252 Received shutdown signal, test time was about 2.000000 seconds 00:26:15.252 00:26:15.252 Latency(us) 00:26:15.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.252 =================================================================================================================== 00:26:15.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:15.252 15:13:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:15.252 15:13:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:15.252 15:13:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85675' 00:26:15.252 15:13:30 -- common/autotest_common.sh@955 -- # kill 85675 00:26:15.252 15:13:30 -- common/autotest_common.sh@960 -- # wait 85675 00:26:15.511 15:13:31 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:15.511 15:13:31 -- host/digest.sh@54 -- # local rw bs qd 00:26:15.511 15:13:31 -- host/digest.sh@56 -- # rw=randread 00:26:15.511 15:13:31 -- host/digest.sh@56 -- # bs=131072 00:26:15.511 15:13:31 -- host/digest.sh@56 -- # qd=16 00:26:15.511 15:13:31 -- host/digest.sh@58 -- # bperfpid=85765 00:26:15.511 15:13:31 -- host/digest.sh@60 -- # waitforlisten 85765 /var/tmp/bperf.sock 00:26:15.512 15:13:31 -- common/autotest_common.sh@817 -- # '[' -z 85765 ']' 00:26:15.512 15:13:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:15.512 15:13:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:15.512 15:13:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:15.512 15:13:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:15.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:15.512 15:13:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:15.512 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:26:15.512 [2024-04-18 15:13:31.177322] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:15.512 [2024-04-18 15:13:31.177860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85765 ] 00:26:15.512 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.512 Zero copy mechanism will not be used. 00:26:15.770 [2024-04-18 15:13:31.320061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.770 [2024-04-18 15:13:31.420486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.716 15:13:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:16.716 15:13:32 -- common/autotest_common.sh@850 -- # return 0 00:26:16.716 15:13:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:16.716 15:13:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:16.716 15:13:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:16.716 15:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.716 15:13:32 -- common/autotest_common.sh@10 -- # set +x 00:26:16.716 15:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.716 15:13:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.716 15:13:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:16.975 nvme0n1 00:26:16.975 15:13:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:16.975 15:13:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.975 15:13:32 -- common/autotest_common.sh@10 -- # set +x 00:26:16.975 15:13:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.975 15:13:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:16.975 15:13:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:16.975 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:16.975 Zero copy mechanism will not be used. 00:26:16.975 Running I/O for 2 seconds... 
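The xtrace above shows test/nvmf/host/digest.sh wiring up the second error pass: bdevperf is started in wait-for-RPC mode, NVMe error counters are enabled, crc32c error injection is turned off while the controller attaches with data digests (--ddgst), injection is switched to corrupt mode, and perform_tests drives the I/O whose digest failures are logged below. A condensed sketch of that flow follows; the rpc.py and bdevperf.py paths and all flags are taken verbatim from the trace, while the socket used by the target-side rpc_cmd calls and the exact meaning of -i 32 are assumptions, not something the log states.

# Sketch only: mirrors the digest.sh steps traced above, not a drop-in replacement for the test.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf as the TCP initiator: core mask 0x2, 131072-byte random reads,
# queue depth 16, 2-second run, -z to wait for RPC configuration.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Initiator side: keep per-command NVMe error statistics and retry failed I/O
# at the bdev layer (--bdev-retry-count -1, as in the trace).
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Presumably the nvmf target side (rpc_cmd in the trace; default /var/tmp/spdk.sock
# assumed here): disable crc32c injection while the controller attaches with
# data digest enabled.
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection in corrupt mode (-i 32 copied verbatim from the trace),
# then kick off the workload.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

# Afterwards digest.sh reads the transient transport error counter back from bdevperf;
# this is the query that returned 179 for the first pass above.
"$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest errors that follow are the expected result of that corrupt-mode injection: each READ completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which increments the counter checked at the end of the pass.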
00:26:17.235 [2024-04-18 15:13:32.686617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.686696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.686711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.691006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.691071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.691084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.695385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.695437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.695450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.699050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.699100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.699113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.701239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.701276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.701288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.705303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.705351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.705362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.709183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.709228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.709240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.713199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.713243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.713255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.715292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.715332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.715343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.719193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.719233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.235 [2024-04-18 15:13:32.719246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.235 [2024-04-18 15:13:32.722471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.235 [2024-04-18 15:13:32.722511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.722523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.725255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.725292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.725303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.728894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.728933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.728945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.732088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.732125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.732136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.734841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.734879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.734890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.737980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.738019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.738030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.741932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.741971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.741983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.745463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.745499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.745511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.748009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.748047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.748058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.751774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.751812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.751823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.755814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.755857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.755869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.759148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.759188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.759199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.762006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.762044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.762055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.765347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.765383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.765394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.768478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.768517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.768528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.771718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.771758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.771770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.775376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.775415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.775426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.779558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 
15:13:32.779614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.782143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.782184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.782196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.785840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.785888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.785900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.789785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.789824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.789836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.793312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.793348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.793359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.795866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.795905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.795916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.799683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.799723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.799734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.803150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.803189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.803200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.806528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.806579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.806590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.809483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.809520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.236 [2024-04-18 15:13:32.809531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.236 [2024-04-18 15:13:32.813392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.236 [2024-04-18 15:13:32.813429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.813440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.816377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.816414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.816425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.819720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.819759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.819770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.823168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.823207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.823218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.825914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.825951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.825962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.829450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.829489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.829500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.833091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.833129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.833140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.836714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.836751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.836763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.838965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.839001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.839012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.842608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.842645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.842656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.846403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.846443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.846454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.850271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.850316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.850327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.852670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.852704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.852715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.856003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.856041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.856052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.860099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.860143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.860154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.863821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.863881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.863895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.867426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.867466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.867478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.870013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.870053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.870065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.874163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.874205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.874217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.877866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.877920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.877932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.880288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.880325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.880337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.884776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.884818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.884830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.888259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.888299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.888311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.890512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.890563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.890574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.894418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.894458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.894470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.898606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 
00:26:17.237 [2024-04-18 15:13:32.898647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.898659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.902448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.237 [2024-04-18 15:13:32.902488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.237 [2024-04-18 15:13:32.902499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.237 [2024-04-18 15:13:32.905107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.905141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.905152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.908184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.908221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.908232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.912211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.912252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.912264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.915424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.915466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.915478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.918501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.918557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.918569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.922259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.922302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.922314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.925079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.925115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.925126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.928646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.928687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.928698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.932013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.932052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.934831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.934871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.934882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.238 [2024-04-18 15:13:32.937408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.238 [2024-04-18 15:13:32.937444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.238 [2024-04-18 15:13:32.937456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.940770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.498 [2024-04-18 15:13:32.940808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.498 [2024-04-18 15:13:32.940819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.944607] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.498 [2024-04-18 15:13:32.944645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.498 [2024-04-18 15:13:32.944656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.947689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.498 [2024-04-18 15:13:32.947728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.498 [2024-04-18 15:13:32.947739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.950549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.498 [2024-04-18 15:13:32.950585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.498 [2024-04-18 15:13:32.950596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.954478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.498 [2024-04-18 15:13:32.954521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.498 [2024-04-18 15:13:32.954532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.498 [2024-04-18 15:13:32.957281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.957314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.957325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.960475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.960525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.963846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.963885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.963896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:17.499 [2024-04-18 15:13:32.967353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.967393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.967405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.970116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.970152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.970164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.973681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.973717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.973729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.977647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.977686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.977698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.980072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.980108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.980119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.983277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.983317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.983328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.986821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.986860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.986871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.990436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.990470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.994100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.994131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.994142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:32.996199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:32.996229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:32.996240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.000101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.000133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.000145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.002942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.002973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.006055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.006088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.006100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.009921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.009957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.009968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.013439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.013471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.013482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.016306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.016337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.016348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.019617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.019648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.019658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.023461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.023494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.023505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.026846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.026878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.026889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.029045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.029075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.029085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.033114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.033147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.033158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.036942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.036976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.036987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.040850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.040882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.499 [2024-04-18 15:13:33.040893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.499 [2024-04-18 15:13:33.043212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.499 [2024-04-18 15:13:33.043244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.043256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.047670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.047702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.047713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.050820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.050854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.050866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.054318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.054351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.054362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.056982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.057013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 
[2024-04-18 15:13:33.057024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.060431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.060462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.060472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.063835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.063867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.063878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.066502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.066534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.066556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.070098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.070129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.070140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.073420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.073450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.073461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.076078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.076108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.076119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.079499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.079531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.079556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.083077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.083119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.086110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.086141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.086152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.089433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.089464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.089475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.092483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.092514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.092525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.095591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.095621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.095632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.098723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.098755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.098766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.101578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.101606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.101617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.104565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.104593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.104604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.107219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.107253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.110705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.110737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.110748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.114350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.114383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.114395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.117029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.117060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.117071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.120550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.120580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.120592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.124368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.124401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.124412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.127815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.127847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.500 [2024-04-18 15:13:33.127857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.500 [2024-04-18 15:13:33.130301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.500 [2024-04-18 15:13:33.130332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.130343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.133360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.133391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.133402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.136335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.136367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.136378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.139638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.139669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.139680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.142747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.142782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.142793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.146052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 
[2024-04-18 15:13:33.146082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.146093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.149466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.149496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.149507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.153013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.153046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.153057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.155637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.155667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.155679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.159825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.159860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.159870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.162373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.162405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.162416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.165520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.165560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.165572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.169783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.169816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.169827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.173587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.173617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.173628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.176130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.176161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.176172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.180070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.180103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.180114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.184421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.184455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.184466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.188152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.188190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.188202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.191724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.191758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.191769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.194232] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.194264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.194275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.197970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.198005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.198016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.501 [2024-04-18 15:13:33.201490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.501 [2024-04-18 15:13:33.201524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.501 [2024-04-18 15:13:33.201549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.760 [2024-04-18 15:13:33.204502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.204534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.204560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.207525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.207567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.207578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.210770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.210802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.210813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.214253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.214288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.214299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:17.761 [2024-04-18 15:13:33.217255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.217288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.217299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.220498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.220530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.220557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.224054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.224087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.224099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.226731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.226761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.226773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.230401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.230436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.230448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.234180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.234216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.234228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.237974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.238009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.238020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.240844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.240875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.240886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.245266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.245300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.245311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.248930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.248963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.248975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.251482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.251513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.251524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.255977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.256026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.256038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.260492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.260529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.260555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.263263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.263294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.263305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.266808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.266842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.266854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.270483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.270515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.270526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.273177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.273208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.273219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.276903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.276934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.276945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.280579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.280609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.280619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.284497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.284529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.284553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.286903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.286932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.286943] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.290136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.290170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.290182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.293763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.293794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.293806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.297953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.297987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.301320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.301350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.301362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.303670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.303699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.303709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.307143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.307176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.307187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.310334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.310365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.761 [2024-04-18 15:13:33.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.313234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.313266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.313277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.316583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.316613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.316625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.320321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.320352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.320364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.323964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.323995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.324005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.327544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.327585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.327597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.329915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.329946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.329956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.333624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.333662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.333673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.336513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.336555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.336567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.340057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.340089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.340100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.343481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.343514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.343526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.346060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.346089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.346100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.349359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.349391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.349402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.353482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.353515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.353527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.357484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.357518] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.357529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.361518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.361561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.361573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.363838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.761 [2024-04-18 15:13:33.363871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.761 [2024-04-18 15:13:33.363882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.761 [2024-04-18 15:13:33.368030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.368063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.368074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.370478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.370510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.370521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.374245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.374279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.374290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.377860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.377905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.377917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.381654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.381688] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.381700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.384294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.384325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.384336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.387913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.387945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.387956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.391979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.392011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.392022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.395602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.395631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.395642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.399189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.399220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.399231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.401522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.401561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.401572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.405374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 
00:26:17.762 [2024-04-18 15:13:33.405404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.405415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.408808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.408838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.408849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.412097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.412131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.412143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.415444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.415477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.415488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.418323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.418353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.418365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.421514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.421556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.421567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.425038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.425069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.425081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.428764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.428795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.428806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.432006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.432038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.432049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.435027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.435058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.435069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.438142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.438177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.438189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.441203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.441236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.441248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.444851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.444887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.444898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.448300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.448337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.448350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.450950] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.450983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.450994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.454929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.454966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.454978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.458211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.458245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.458256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.460611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.460641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.460652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.762 [2024-04-18 15:13:33.463984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:17.762 [2024-04-18 15:13:33.464018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.762 [2024-04-18 15:13:33.464029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.467788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.467821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.467833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.470109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.470142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.470154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:18.023 [2024-04-18 15:13:33.473286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.473319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.473330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.476884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.476916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.476927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.480141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.480173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.480184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.483077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.483110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.483121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.486011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.486044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.486055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.489631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.489664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.489675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.493617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.023 [2024-04-18 15:13:33.493649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.023 [2024-04-18 15:13:33.493661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.023 [2024-04-18 15:13:33.496134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.496165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.496175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.499625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.499656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.499667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.502387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.502437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.502448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.506296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.506331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.506342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.510029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.510065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.510076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.512249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.512279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.512290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.516356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.516390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.516400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.520277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.520310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.520321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.522962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.522991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.523002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.526647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.526676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.526687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.530586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.530617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.530629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.533241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.533274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.533285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.536723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.536755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.536766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.540083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.540114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.540125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.543198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.543229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.543240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.546588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.546618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.546629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.549203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.549234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.549245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.552629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.552660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.552671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.556841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.556877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.556889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.561174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.561220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.564224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.564255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 
[2024-04-18 15:13:33.564266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.567733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.567764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.567775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.571856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.571890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.571902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.574556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.574597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.574609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.577897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.577927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.577938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.581531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.581572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.581583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.585772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.024 [2024-04-18 15:13:33.585805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.024 [2024-04-18 15:13:33.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.024 [2024-04-18 15:13:33.588718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.588749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.588759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.591919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.591951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.591962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.595984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.596019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.596031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.598814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.598849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.598861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.602712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.602746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.602758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.606932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.606966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.606977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.609663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.609696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.609707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.613164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.613198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.613209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.617089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.617120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.617131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.620236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.620268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.620279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.623669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.623707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.623718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.627007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.627040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.627051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.631259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.631297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.631309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.633889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.633936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.633949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.637707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.637740] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.637751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.640857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.640890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.640902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.644584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.644613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.644624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.647874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.647906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.647919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.651046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.651080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.651092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.654422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.654455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.654467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.657979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.658011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.658022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.661526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 
[2024-04-18 15:13:33.661568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.661580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.664542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.664589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.664601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.668435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.668469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.668481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.672505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.672548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.672560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.675154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.675202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.675214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.679306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.025 [2024-04-18 15:13:33.679340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.025 [2024-04-18 15:13:33.679352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.025 [2024-04-18 15:13:33.683482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.683516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.683527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.687186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.687217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.687228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.689319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.689348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.689360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.693348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.693382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.693393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.697510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.697554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.697566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.700427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.700459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.700470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.703662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.703694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.703705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.707415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.707448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.707459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.711316] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.711349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.711360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.715263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.715297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.715308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.717867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.717912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.717923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.721724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.721756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.721767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.026 [2024-04-18 15:13:33.725007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.026 [2024-04-18 15:13:33.725039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.026 [2024-04-18 15:13:33.725050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.728205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.728239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.728250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.731745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.731778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.731790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:18.330 [2024-04-18 15:13:33.734487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.734520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.734532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.737856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.737918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.737942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.741289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.741321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.741332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.745212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.745248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.747786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.747817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.751341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.751376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.751387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.755180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.755213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.755225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.759498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.759533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.759554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.763171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.763205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.763216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.765361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.765392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.765402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.769914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.769948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.774205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.774240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.774251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.777712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.330 [2024-04-18 15:13:33.777744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.330 [2024-04-18 15:13:33.777756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.330 [2024-04-18 15:13:33.780181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.780213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.780224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.784144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.784178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.784189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.787977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.788010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.788022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.790641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.790672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.790684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.794525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.794575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.794588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.797899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.797948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.797960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.801273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.801304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.801315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.803995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.804044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.804055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.807777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.807808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.807819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.810889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.810923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.810934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.814140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.814173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.814184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.817553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.817582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.817593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.820684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.820715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.820726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.824076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.824108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.824119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.827227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.827260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 
[2024-04-18 15:13:33.827271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.830937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.830969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.830981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.833859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.833919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.833930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.838016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.838050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.838061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.841275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.841307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.843908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.843956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.843968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.848205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.848239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.848250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.851909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.851941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.851952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.854571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.854614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.854626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.858017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.858048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.858060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.862086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.862119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.865781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.865812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.865824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.868792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.331 [2024-04-18 15:13:33.868823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.331 [2024-04-18 15:13:33.868835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.331 [2024-04-18 15:13:33.872486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.872517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.872528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.876558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.876588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.876599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.880519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.880560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.880571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.884097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.884129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.884140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.886208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.886239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.886250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.889526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.889568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.889580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.893018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.893050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.893061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.896337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.896370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.896382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.899959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.899992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.900003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.903086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.903119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.903130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.906361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.906409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.906421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.909317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.909347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.909358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.912860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.912893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.912905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.917142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.917175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.917186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.921426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.921461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.921472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.925445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 
[2024-04-18 15:13:33.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.925490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.927704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.927732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.927743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.932008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.932041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.932052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.935912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.935943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.935954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.938484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.938517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.938529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.942072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.942105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.942116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.945868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.945931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.945944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.950224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.950260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.950271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.952956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.952987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.952998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.956194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.956230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.956241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.960190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.960226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.960238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.964236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.964270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.332 [2024-04-18 15:13:33.964281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.332 [2024-04-18 15:13:33.966686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.332 [2024-04-18 15:13:33.966716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.966727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.970476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.970510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.970521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.974320] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.974358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.974370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.977146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.977179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.977190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.980362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.980396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.980408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.983465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.983498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.983509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.987286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.987319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.987331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.990819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.990855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.990867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:33.993761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.993794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.993804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:18.333 [2024-04-18 15:13:33.997255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:33.997286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:33.997298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.000788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.000822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.000833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.003612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.003645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.003657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.007365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.007398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.007409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.011272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.011307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.011318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.014987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.015020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.015031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.017383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.017415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.017426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.021341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.021376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.021387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.024737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.024774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.024785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.028232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.028267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.028279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.333 [2024-04-18 15:13:34.031075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.333 [2024-04-18 15:13:34.031107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.333 [2024-04-18 15:13:34.031118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.594 [2024-04-18 15:13:34.034442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.594 [2024-04-18 15:13:34.034482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.594 [2024-04-18 15:13:34.034494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.594 [2024-04-18 15:13:34.038459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.594 [2024-04-18 15:13:34.038498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.594 [2024-04-18 15:13:34.038510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.594 [2024-04-18 15:13:34.042294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.594 [2024-04-18 15:13:34.042334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.042347] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.044642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.044673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.044685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.048589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.048621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.048633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.052650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.052683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.052696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.055383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.055418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.055428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.058628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.058664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.058677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.062185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.062233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.065595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.065624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.065635] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.069487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.069524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.069548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.072373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.072405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.072416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.075871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.075905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.075917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.079883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.079915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.079926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.083706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.083739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.083750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.087484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.087517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.087529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.089616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.089644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.595 [2024-04-18 15:13:34.089655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.093853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.093902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.093914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.096824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.096855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.096867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.100145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.100180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.100191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.103822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.103857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.103868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.107847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.107881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.107893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.110962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.111013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.111024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.114462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.114497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.114509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.117273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.117306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.117317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.121092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.121127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.121138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.125709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.125745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.125757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.129847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.129928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.129941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.132946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.595 [2024-04-18 15:13:34.132978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.595 [2024-04-18 15:13:34.132990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.595 [2024-04-18 15:13:34.136806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.136836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.136849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.141442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.141483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.141495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.145846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.145917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.145930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.150030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.150068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.150080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.152375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.152409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.152420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.156794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.156832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.156843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.161143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.161180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.161192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.163917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.163951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.163962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.167672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.167700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.167712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.171590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.171630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.171645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.175881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.175919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.175931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.179673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.179701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.179713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.182407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.182438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.182450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.186126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.186160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.186172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.190097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.190130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.190142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.194136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 
[2024-04-18 15:13:34.194171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.194182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.197104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.197137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.200601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.200642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.200653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.204568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.204602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.204614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.208597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.208630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.208641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.210777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.210823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.215148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.215187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.215198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.219245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.219282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.219294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.222175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.222207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.222218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.225721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.225754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.225765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.229803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.229853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.229865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.596 [2024-04-18 15:13:34.233807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.596 [2024-04-18 15:13:34.233839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.596 [2024-04-18 15:13:34.233850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.238103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.238138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.238150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.240473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.240503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.240515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.244136] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.244167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.244177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.248013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.248048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.248060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.251819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.251853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.251864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.255416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.255452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.255463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.259151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.259186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.259198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.262214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.262249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.262261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.266531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.266576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.266589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:18.597 [2024-04-18 15:13:34.270528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.270575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.270587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.273417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.273450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.273461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.277062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.277097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.277108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.280978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.281013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.281024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.283504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.283547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.283558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.287641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.287672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.287683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.291758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.291793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.291804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.295035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.295068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.295080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.597 [2024-04-18 15:13:34.298077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.597 [2024-04-18 15:13:34.298112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.597 [2024-04-18 15:13:34.298124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.302221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.302255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.302266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.306358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.306393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.306405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.310230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.310264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.310276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.312892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.312922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.312933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.316148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.316180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.316191] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.320074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.320107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.320118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.323826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.323860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.323872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.326895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.326930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.326954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.330518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.330577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.857 [2024-04-18 15:13:34.330589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.857 [2024-04-18 15:13:34.334187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.857 [2024-04-18 15:13:34.334220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.334232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.336531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.336581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.336594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.340286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.340326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.340338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.344513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.344570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.344584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.348535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.348590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.348603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.350798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.350833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.350845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.354811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.354848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.354859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.359365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.359417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.359429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.362451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.362484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.362496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.366034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.366081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:18.858 [2024-04-18 15:13:34.366092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.370053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.370087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.370099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.373936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.373970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.373983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.376375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.376405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.376416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.380188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.380222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.380233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.384393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.384428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.384439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.388734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.388770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.388781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.391688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.391721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.391732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.395460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.395495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.395505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.398518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.398561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.398573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.402837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.402873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.402885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.405677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.405709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.405720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.409355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.409389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.413413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.413446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.413457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.417483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.417518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.417530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.420394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.420428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.420439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.424214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.424248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.424259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.428043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.428076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.858 [2024-04-18 15:13:34.428087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.858 [2024-04-18 15:13:34.432124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.858 [2024-04-18 15:13:34.432159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.432170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.434833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.434867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.434879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.438488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.438522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.438534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.441727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.441760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.441771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.444573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.444604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.444615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.448255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.448290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.448301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.451690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.451739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.451750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.454627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.454659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.454671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.458518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.458564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.458576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.462359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.462392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.462404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.466049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 
[2024-04-18 15:13:34.466082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.466094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.468479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.468511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.468522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.472540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.472586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.472598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.475325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.475357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.475369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.478833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.478867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.478879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.483083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.483117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.483128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.486901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.486935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.486946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.489417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.489449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.489460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.493465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.493498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.493510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.497218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.497269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.497280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.501086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.501119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.501130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.503954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.503983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.503994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.507138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.507170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.507181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.511087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.511120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.511131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.513696] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.513728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.513740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.517142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.517189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.517201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.520023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.520057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.859 [2024-04-18 15:13:34.520069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.859 [2024-04-18 15:13:34.523703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.859 [2024-04-18 15:13:34.523735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.523747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.527506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.527557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.527585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.530489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.530523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.530549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.533606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.533639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.533650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:18.860 [2024-04-18 15:13:34.537163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.537201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.537213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.540861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.540899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.540910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.543917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.543949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.543960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.547297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.547329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.547340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.551416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.551452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.551464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.555200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.555236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.555249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.557804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.557834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.557846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.860 [2024-04-18 15:13:34.561236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:18.860 [2024-04-18 15:13:34.561270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.860 [2024-04-18 15:13:34.561282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.564837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.564870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.564882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.569000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.569036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.569048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.572507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.572550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.572562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.575575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.575610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.575622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.579800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.579836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.579848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.583937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.583973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.583984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.588074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.588110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.588122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.590636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.590668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.590680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.594081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.594117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.594128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.598259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.598295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.598307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.601346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.601379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.601391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.604908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.604941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.604952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.608303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.608348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.608360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.611355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.611405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.611416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.614978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.615023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.618358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.618392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.618404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.621495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.621526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.621547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.624477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.624510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.624521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.628015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.628063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.120 [2024-04-18 15:13:34.628075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.120 [2024-04-18 15:13:34.631488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.120 [2024-04-18 15:13:34.631522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 
[2024-04-18 15:13:34.631534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.634861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.634894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.634905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.637445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.637477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.637489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.640983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.641028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.641039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.644325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.644359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.644371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.647341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.647374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.647385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.651398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.651433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.651445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.655696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.655730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.655741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.658387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.658419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.658430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.662023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.662057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.662069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.665806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.665838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.665849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.668764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.668796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.668807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.671904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.671936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.671947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.121 [2024-04-18 15:13:34.675430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x60c8a0) 00:26:19.121 [2024-04-18 15:13:34.675465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.121 [2024-04-18 15:13:34.675477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.121 00:26:19.121 Latency(us) 00:26:19.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.121 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:19.121 nvme0n1 : 2.00 8990.79 1123.85 
0.00 0.00 1776.72 477.04 6527.28 00:26:19.121 =================================================================================================================== 00:26:19.121 Total : 8990.79 1123.85 0.00 0.00 1776.72 477.04 6527.28 00:26:19.121 0 00:26:19.121 15:13:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:19.121 15:13:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:19.121 | .driver_specific 00:26:19.121 | .nvme_error 00:26:19.121 | .status_code 00:26:19.121 | .command_transient_transport_error' 00:26:19.121 15:13:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:19.121 15:13:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:19.380 15:13:34 -- host/digest.sh@71 -- # (( 580 > 0 )) 00:26:19.380 15:13:34 -- host/digest.sh@73 -- # killprocess 85765 00:26:19.380 15:13:34 -- common/autotest_common.sh@936 -- # '[' -z 85765 ']' 00:26:19.380 15:13:34 -- common/autotest_common.sh@940 -- # kill -0 85765 00:26:19.380 15:13:34 -- common/autotest_common.sh@941 -- # uname 00:26:19.380 15:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.380 15:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85765 00:26:19.380 15:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:19.380 killing process with pid 85765 00:26:19.380 15:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:19.380 15:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85765' 00:26:19.380 15:13:34 -- common/autotest_common.sh@955 -- # kill 85765 00:26:19.380 Received shutdown signal, test time was about 2.000000 seconds 00:26:19.380 00:26:19.380 Latency(us) 00:26:19.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.380 =================================================================================================================== 00:26:19.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:19.380 15:13:34 -- common/autotest_common.sh@960 -- # wait 85765 00:26:19.639 15:13:35 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:19.640 15:13:35 -- host/digest.sh@54 -- # local rw bs qd 00:26:19.640 15:13:35 -- host/digest.sh@56 -- # rw=randwrite 00:26:19.640 15:13:35 -- host/digest.sh@56 -- # bs=4096 00:26:19.640 15:13:35 -- host/digest.sh@56 -- # qd=128 00:26:19.640 15:13:35 -- host/digest.sh@58 -- # bperfpid=85850 00:26:19.640 15:13:35 -- host/digest.sh@60 -- # waitforlisten 85850 /var/tmp/bperf.sock 00:26:19.640 15:13:35 -- common/autotest_common.sh@817 -- # '[' -z 85850 ']' 00:26:19.640 15:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:19.640 15:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:19.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:19.640 15:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:19.640 15:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:19.640 15:13:35 -- common/autotest_common.sh@10 -- # set +x 00:26:19.640 15:13:35 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:19.640 [2024-04-18 15:13:35.244822] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:26:19.640 [2024-04-18 15:13:35.244910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85850 ] 00:26:19.899 [2024-04-18 15:13:35.387829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.899 [2024-04-18 15:13:35.490220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.466 15:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.466 15:13:36 -- common/autotest_common.sh@850 -- # return 0 00:26:20.466 15:13:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.466 15:13:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:20.724 15:13:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:20.724 15:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.724 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:26:20.724 15:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.724 15:13:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.724 15:13:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.983 nvme0n1 00:26:20.983 15:13:36 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:20.983 15:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.983 15:13:36 -- common/autotest_common.sh@10 -- # set +x 00:26:20.983 15:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.983 15:13:36 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:20.983 15:13:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.242 Running I/O for 2 seconds... 
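The RPC sequence driven against this bdevperf instance can be condensed into the following shell sketch. It is a simplified reconstruction based only on the host/digest.sh trace captured in this log: the socket path, target address, subsystem NQN, and crc32c inject interval are copied from the trace, while the accel_error_inject_error calls go through the rpc_cmd wrapper in the trace, so the socket they actually target is not visible here and is left at the default.

# Sketch reconstructed from the host/digest.sh trace above (not part of the log output).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Record per-command NVMe error statistics and retry failed commands indefinitely.
$rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c error injection disabled while the controller is attached
# (issued via rpc_cmd in the trace, so the default RPC socket is used).
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled (--ddgst).
$rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c calculation so the workload hits data digest errors.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Start the queued bdevperf workload (randwrite, 4096 B, queue depth 128 for this run).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests

# Afterwards the script counts commands that completed with a transient transport
# error and asserts the count is greater than zero (same jq filter as in the trace).
$rpc -s $sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'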
00:26:21.242 [2024-04-18 15:13:36.775500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ee5c8 00:26:21.242 [2024-04-18 15:13:36.776304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.776341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.784909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fac10 00:26:21.242 [2024-04-18 15:13:36.785540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.785581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.796174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e6300 00:26:21.242 [2024-04-18 15:13:36.796802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.796840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.806794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de8a8 00:26:21.242 [2024-04-18 15:13:36.807586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.807638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.817040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e01f8 00:26:21.242 [2024-04-18 15:13:36.818097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.818141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.826739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e01f8 00:26:21.242 [2024-04-18 15:13:36.827824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.827868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.838311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fb480 00:26:21.242 [2024-04-18 15:13:36.839717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.839762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.847059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f0bc0 00:26:21.242 [2024-04-18 15:13:36.847857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.847898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.857564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eaab8 00:26:21.242 [2024-04-18 15:13:36.858661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.858701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.868159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f20d8 00:26:21.242 [2024-04-18 15:13:36.869321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.242 [2024-04-18 15:13:36.869359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:21.242 [2024-04-18 15:13:36.878583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190df118 00:26:21.242 [2024-04-18 15:13:36.879887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.879920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.888782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fc998 00:26:21.243 [2024-04-18 15:13:36.890133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.890171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.897076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e88f8 00:26:21.243 [2024-04-18 15:13:36.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.898008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.907639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fcdd0 00:26:21.243 [2024-04-18 15:13:36.908659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.908695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.918031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e12d8 00:26:21.243 [2024-04-18 15:13:36.919198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.919233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.928088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5ec8 00:26:21.243 [2024-04-18 15:13:36.928826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.928859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.937997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ed4e8 00:26:21.243 [2024-04-18 15:13:36.938902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.243 [2024-04-18 15:13:36.938934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:21.243 [2024-04-18 15:13:36.947671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f8e88 00:26:21.503 [2024-04-18 15:13:36.948711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.948742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:36.957594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ef6a8 00:26:21.503 [2024-04-18 15:13:36.958254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.958284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:36.967989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eea00 00:26:21.503 [2024-04-18 15:13:36.968770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.968801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:36.977596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3498 00:26:21.503 [2024-04-18 15:13:36.978286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.978335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:36.987438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ebfd0 00:26:21.503 [2024-04-18 15:13:36.988349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.988380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:36.996865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7100 00:26:21.503 [2024-04-18 15:13:36.997642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:36.997672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.006218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ec840 00:26:21.503 [2024-04-18 15:13:37.006876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.006910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.019048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ea248 00:26:21.503 [2024-04-18 15:13:37.020489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.020524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.027277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190feb58 00:26:21.503 [2024-04-18 15:13:37.027994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.028022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.037580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f3a28 00:26:21.503 [2024-04-18 15:13:37.038682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.038715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.047674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e23b8 00:26:21.503 [2024-04-18 15:13:37.048356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.048391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.057427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fb480 00:26:21.503 [2024-04-18 15:13:37.058041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.058072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.068965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4140 00:26:21.503 [2024-04-18 15:13:37.070277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.070313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.078359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fdeb0 00:26:21.503 [2024-04-18 15:13:37.079370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.079408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.088062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e73e0 00:26:21.503 [2024-04-18 15:13:37.089037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.089067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.099372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f5be8 00:26:21.503 [2024-04-18 15:13:37.100741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.100771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.108725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4578 00:26:21.503 [2024-04-18 15:13:37.109968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.110000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.118127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7970 00:26:21.503 [2024-04-18 15:13:37.119236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 
15:13:37.119265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.127640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ef270 00:26:21.503 [2024-04-18 15:13:37.128637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.128666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.136908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5ec8 00:26:21.503 [2024-04-18 15:13:37.137687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.137718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.147722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eaef0 00:26:21.503 [2024-04-18 15:13:37.148306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.148334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.157779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de470 00:26:21.503 [2024-04-18 15:13:37.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.158555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.167521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e84c0 00:26:21.503 [2024-04-18 15:13:37.168402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.168432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.178288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f9b30 00:26:21.503 [2024-04-18 15:13:37.179656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.179699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.185984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3060 00:26:21.503 [2024-04-18 15:13:37.186820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:21.503 [2024-04-18 15:13:37.186849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.195906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f4f40 00:26:21.503 [2024-04-18 15:13:37.196413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.503 [2024-04-18 15:13:37.196439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.503 [2024-04-18 15:13:37.208062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de8a8 00:26:21.764 [2024-04-18 15:13:37.209630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.209660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.214893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190dfdc0 00:26:21.764 [2024-04-18 15:13:37.215723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.215752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.226483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e9e10 00:26:21.764 [2024-04-18 15:13:37.227814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.227857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.235859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e38d0 00:26:21.764 [2024-04-18 15:13:37.236964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.236995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.245755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fd208 00:26:21.764 [2024-04-18 15:13:37.246635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.246664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.255558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fa7d8 00:26:21.764 [2024-04-18 15:13:37.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3256 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.256716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.264923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e38d0 00:26:21.764 [2024-04-18 15:13:37.266009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.266043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.274329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4de8 00:26:21.764 [2024-04-18 15:13:37.275207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.275244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.286462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eb328 00:26:21.764 [2024-04-18 15:13:37.288042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.288080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.293502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f4b08 00:26:21.764 [2024-04-18 15:13:37.294290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.294325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.303867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190df988 00:26:21.764 [2024-04-18 15:13:37.304604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.304640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.314247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190feb58 00:26:21.764 [2024-04-18 15:13:37.314752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.325973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190df988 00:26:21.764 [2024-04-18 15:13:37.327170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:21283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.327206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.335888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3d08 00:26:21.764 [2024-04-18 15:13:37.336971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.337009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.345738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190dece0 00:26:21.764 [2024-04-18 15:13:37.346680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.346716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.355631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fc560 00:26:21.764 [2024-04-18 15:13:37.356411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.356454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.365169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ef6a8 00:26:21.764 [2024-04-18 15:13:37.365822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.365856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.377923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ea680 00:26:21.764 [2024-04-18 15:13:37.379500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.764 [2024-04-18 15:13:37.379546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.764 [2024-04-18 15:13:37.385034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fac10 00:26:21.765 [2024-04-18 15:13:37.385823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.385854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.397035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f8618 00:26:21.765 [2024-04-18 15:13:37.398237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:98 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.398271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.406250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5ec8 00:26:21.765 [2024-04-18 15:13:37.407173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.407206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.416043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0a68 00:26:21.765 [2024-04-18 15:13:37.417063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.417092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.426251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e9e10 00:26:21.765 [2024-04-18 15:13:37.427302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.427336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.434548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f8a50 00:26:21.765 [2024-04-18 15:13:37.435209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.435239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.445000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fb480 00:26:21.765 [2024-04-18 15:13:37.445787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.445819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.456888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de8a8 00:26:21.765 [2024-04-18 15:13:37.458135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.458168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:21.765 [2024-04-18 15:13:37.466404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e23b8 00:26:21.765 [2024-04-18 15:13:37.467661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.765 [2024-04-18 15:13:37.467692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.475792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7538 00:26:22.025 [2024-04-18 15:13:37.476615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.476647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.485576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e12d8 00:26:22.025 [2024-04-18 15:13:37.486398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.486428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.496036] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fac10 00:26:22.025 [2024-04-18 15:13:37.497156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.497190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.506234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f6458 00:26:22.025 [2024-04-18 15:13:37.506934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.506968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.515956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0630 00:26:22.025 [2024-04-18 15:13:37.516559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.516593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.527619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fc128 00:26:22.025 [2024-04-18 15:13:37.528887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.528926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.536848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f0788 00:26:22.025 [2024-04-18 
15:13:37.537817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.537851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.546062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f20d8 00:26:22.025 [2024-04-18 15:13:37.546905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.546939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.555798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eff18 00:26:22.025 [2024-04-18 15:13:37.556640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.556672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.566255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de470 00:26:22.025 [2024-04-18 15:13:37.567222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.567253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.576368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7970 00:26:22.025 [2024-04-18 15:13:37.577347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.577380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.586487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e6738 00:26:22.025 [2024-04-18 15:13:37.587941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.587973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.595300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fcdd0 00:26:22.025 [2024-04-18 15:13:37.595980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.596010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.605409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e9e10 
00:26:22.025 [2024-04-18 15:13:37.606094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.606125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.617242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f5378 00:26:22.025 [2024-04-18 15:13:37.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.618130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.626461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ec408 00:26:22.025 [2024-04-18 15:13:37.627946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.627979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.635196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fda78 00:26:22.025 [2024-04-18 15:13:37.635884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.635914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.645599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f35f0 00:26:22.025 [2024-04-18 15:13:37.646431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.646465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.657669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ebb98 00:26:22.025 [2024-04-18 15:13:37.659088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.659122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.668713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f46d0 00:26:22.025 [2024-04-18 15:13:37.670268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.670310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.678691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with 
pdu=0x2000190fcdd0 00:26:22.025 [2024-04-18 15:13:37.679979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.680017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.688225] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ea680 00:26:22.025 [2024-04-18 15:13:37.689352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.689389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.697803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f9f68 00:26:22.025 [2024-04-18 15:13:37.698823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.698858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.707453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f9f68 00:26:22.025 [2024-04-18 15:13:37.708333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.708376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.717206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f1430 00:26:22.025 [2024-04-18 15:13:37.718094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.718126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:22.025 [2024-04-18 15:13:37.729316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0a68 00:26:22.025 [2024-04-18 15:13:37.730620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.025 [2024-04-18 15:13:37.730651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.737397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fd208 00:26:22.340 [2024-04-18 15:13:37.738151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.738184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.750229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1372ad0) with pdu=0x2000190de038 00:26:22.340 [2024-04-18 15:13:37.751779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.751810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.760123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f8e88 00:26:22.340 [2024-04-18 15:13:37.761652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.761682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.770320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fd208 00:26:22.340 [2024-04-18 15:13:37.771889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.771920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.780116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eff18 00:26:22.340 [2024-04-18 15:13:37.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.781679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.790666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ec840 00:26:22.340 [2024-04-18 15:13:37.792335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.792367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.797710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ecc78 00:26:22.340 [2024-04-18 15:13:37.798430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.798459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.810287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f0bc0 00:26:22.340 [2024-04-18 15:13:37.811915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.811946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.820300] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f5378 00:26:22.340 [2024-04-18 15:13:37.821828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.821858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.830783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190efae0 00:26:22.340 [2024-04-18 15:13:37.832460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.832491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.837789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e49b0 00:26:22.340 [2024-04-18 15:13:37.838667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.838697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.848987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fbcf0 00:26:22.340 [2024-04-18 15:13:37.850161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.850191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.859427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3060 00:26:22.340 [2024-04-18 15:13:37.860813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.860841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.869706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ebb98 00:26:22.340 [2024-04-18 15:13:37.871320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.871349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.876768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ecc78 00:26:22.340 [2024-04-18 15:13:37.877496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.877525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.886984] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fef90 00:26:22.340 [2024-04-18 15:13:37.887711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.887739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.896485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f96f8 00:26:22.340 [2024-04-18 15:13:37.897123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.340 [2024-04-18 15:13:37.897151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:22.340 [2024-04-18 15:13:37.908045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e7818 00:26:22.341 [2024-04-18 15:13:37.909158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.909186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.917268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e6b70 00:26:22.341 [2024-04-18 15:13:37.918032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.918063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.927100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de038 00:26:22.341 [2024-04-18 15:13:37.927994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.928024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.937021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f1430 00:26:22.341 [2024-04-18 15:13:37.937493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.937519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.948510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f6890 00:26:22.341 [2024-04-18 15:13:37.949697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.949729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:22.341 
[2024-04-18 15:13:37.956483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fbcf0 00:26:22.341 [2024-04-18 15:13:37.957143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.957171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.967939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f46d0 00:26:22.341 [2024-04-18 15:13:37.968990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.969018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.979646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de470 00:26:22.341 [2024-04-18 15:13:37.981238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.981269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.986838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190df550 00:26:22.341 [2024-04-18 15:13:37.987661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:37.987689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:37.998726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190dece0 00:26:22.341 [2024-04-18 15:13:37.999981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:38.000011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:38.008176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f0350 00:26:22.341 [2024-04-18 15:13:38.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.341 [2024-04-18 15:13:38.009261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.341 [2024-04-18 15:13:38.017675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f2948 00:26:22.600 [2024-04-18 15:13:38.018642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.018672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:26:22.600 [2024-04-18 15:13:38.027132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5ec8 00:26:22.600 [2024-04-18 15:13:38.027981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.028010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.036544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e84c0 00:26:22.600 [2024-04-18 15:13:38.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.037242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.048690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eb328 00:26:22.600 [2024-04-18 15:13:38.050164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.050196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.055274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f4f40 00:26:22.600 [2024-04-18 15:13:38.055923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.055952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.066683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e6300 00:26:22.600 [2024-04-18 15:13:38.067911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.067940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.076028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f6cc8 00:26:22.600 [2024-04-18 15:13:38.076753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.076783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.085793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e1710 00:26:22.600 [2024-04-18 15:13:38.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.086790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.095207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ec840 00:26:22.600 [2024-04-18 15:13:38.096165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.096192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.105160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e23b8 00:26:22.600 [2024-04-18 15:13:38.105695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.105719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.116366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4578 00:26:22.600 [2024-04-18 15:13:38.117579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.117608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.125726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e1f80 00:26:22.600 [2024-04-18 15:13:38.126887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.126916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.134966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5a90 00:26:22.600 [2024-04-18 15:13:38.135811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.135842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.143911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7970 00:26:22.600 [2024-04-18 15:13:38.144599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.144627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.155336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0a68 00:26:22.600 [2024-04-18 15:13:38.156197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.156226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.164900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e27f0 00:26:22.600 [2024-04-18 15:13:38.165984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.166015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.174747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ea248 00:26:22.600 [2024-04-18 15:13:38.175780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.175810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.184316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e9168 00:26:22.600 [2024-04-18 15:13:38.185413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.600 [2024-04-18 15:13:38.185441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.600 [2024-04-18 15:13:38.194376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e7c50 00:26:22.600 [2024-04-18 15:13:38.195529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.195570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.203885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fd640 00:26:22.601 [2024-04-18 15:13:38.204986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.205017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.213724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ec840 00:26:22.601 [2024-04-18 15:13:38.214863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.214891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.221842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e7c50 00:26:22.601 [2024-04-18 15:13:38.222562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.222596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.233792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190feb58 00:26:22.601 [2024-04-18 15:13:38.234955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.234985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.243254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0a68 00:26:22.601 [2024-04-18 15:13:38.244425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.244454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.253340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f81e0 00:26:22.601 [2024-04-18 15:13:38.254587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.254618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.263411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f35f0 00:26:22.601 [2024-04-18 15:13:38.264653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.264684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.272905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f4f40 00:26:22.601 [2024-04-18 15:13:38.274029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.274061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.282633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fe2e8 00:26:22.601 [2024-04-18 15:13:38.283368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.283398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.292199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0a68 00:26:22.601 [2024-04-18 15:13:38.292821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.292847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:22.601 [2024-04-18 15:13:38.303673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ff3c8 00:26:22.601 [2024-04-18 15:13:38.304945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.601 [2024-04-18 15:13:38.304975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.860 [2024-04-18 15:13:38.311704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f31b8 00:26:22.860 [2024-04-18 15:13:38.312459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.860 [2024-04-18 15:13:38.312503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:22.860 [2024-04-18 15:13:38.323223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ed4e8 00:26:22.860 [2024-04-18 15:13:38.324170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.860 [2024-04-18 15:13:38.324199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.332916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e8088 00:26:22.861 [2024-04-18 15:13:38.333914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.333960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.341968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f5be8 00:26:22.861 [2024-04-18 15:13:38.342803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.342831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.351246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f7da8 00:26:22.861 [2024-04-18 15:13:38.351969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.352001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.363705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e27f0 00:26:22.861 [2024-04-18 15:13:38.365101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 
15:13:38.365135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.371116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ed0b0 00:26:22.861 [2024-04-18 15:13:38.371746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.371778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.383201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fa3a0 00:26:22.861 [2024-04-18 15:13:38.384392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.384424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.392455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e8088 00:26:22.861 [2024-04-18 15:13:38.393229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.393262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.402183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190efae0 00:26:22.861 [2024-04-18 15:13:38.403099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.403128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.411998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f46d0 00:26:22.861 [2024-04-18 15:13:38.412886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.412915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.421237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fa3a0 00:26:22.861 [2024-04-18 15:13:38.422039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.422070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.432416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fe720 00:26:22.861 [2024-04-18 15:13:38.433581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:22.861 [2024-04-18 15:13:38.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.440177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ebb98 00:26:22.861 [2024-04-18 15:13:38.440784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.440812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.451111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f92c0 00:26:22.861 [2024-04-18 15:13:38.452219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.452247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.460032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e5220 00:26:22.861 [2024-04-18 15:13:38.460788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.460819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.469472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f3e60 00:26:22.861 [2024-04-18 15:13:38.470419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.470449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.479084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fd640 00:26:22.861 [2024-04-18 15:13:38.479644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.479673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.490680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f1ca0 00:26:22.861 [2024-04-18 15:13:38.491986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.492016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.498220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e8d30 00:26:22.861 [2024-04-18 15:13:38.499018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23112 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.499045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.508567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eee38 00:26:22.861 [2024-04-18 15:13:38.509290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.509318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.517592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e6b70 00:26:22.861 [2024-04-18 15:13:38.518268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.518296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.528906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f2510 00:26:22.861 [2024-04-18 15:13:38.529728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.529758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.538454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f1ca0 00:26:22.861 [2024-04-18 15:13:38.539509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.539550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.547723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190eff18 00:26:22.861 [2024-04-18 15:13:38.548665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.548692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:22.861 [2024-04-18 15:13:38.557224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3d08 00:26:22.861 [2024-04-18 15:13:38.558192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.861 [2024-04-18 15:13:38.558223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.568726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4de8 00:26:23.121 [2024-04-18 15:13:38.570227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:1343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.570258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.578667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e8088 00:26:23.121 [2024-04-18 15:13:38.580230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.580259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.587993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f5be8 00:26:23.121 [2024-04-18 15:13:38.589431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.589458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.596928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ddc00 00:26:23.121 [2024-04-18 15:13:38.597937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.597966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.606540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f31b8 00:26:23.121 [2024-04-18 15:13:38.607651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.607678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.616423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190fc560 00:26:23.121 [2024-04-18 15:13:38.617118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.617146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.626160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f2d80 00:26:23.121 [2024-04-18 15:13:38.627129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.627158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.637335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e4de8 00:26:23.121 [2024-04-18 15:13:38.638876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.638904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.644337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f2d80 00:26:23.121 [2024-04-18 15:13:38.645022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.645049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.654565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190ddc00 00:26:23.121 [2024-04-18 15:13:38.655374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.655402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.665645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e88f8 00:26:23.121 [2024-04-18 15:13:38.666791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.666821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.676071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e3060 00:26:23.121 [2024-04-18 15:13:38.677422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.677451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.684603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190dece0 00:26:23.121 [2024-04-18 15:13:38.686093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.686122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.694930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e0630 00:26:23.121 [2024-04-18 15:13:38.695824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.695854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.704187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f96f8 00:26:23.121 [2024-04-18 15:13:38.704857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.704884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.713354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e84c0 00:26:23.121 [2024-04-18 15:13:38.714017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.714046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.722721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e2c28 00:26:23.121 [2024-04-18 15:13:38.723167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.723192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.733779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190de470 00:26:23.121 [2024-04-18 15:13:38.734965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.121 [2024-04-18 15:13:38.734993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.121 [2024-04-18 15:13:38.743267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190f2d80 00:26:23.122 [2024-04-18 15:13:38.744304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.122 [2024-04-18 15:13:38.744343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:23.122 [2024-04-18 15:13:38.752658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372ad0) with pdu=0x2000190e9e10 00:26:23.122 [2024-04-18 15:13:38.753742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.122 [2024-04-18 15:13:38.753769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:23.122 00:26:23.122 Latency(us) 00:26:23.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.122 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:23.122 nvme0n1 : 2.00 25547.17 99.79 0.00 0.00 5003.86 2408.25 13686.23 00:26:23.122 =================================================================================================================== 00:26:23.122 Total : 25547.17 99.79 0.00 0.00 5003.86 2408.25 13686.23 00:26:23.122 0 00:26:23.122 15:13:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:23.122 15:13:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:23.122 15:13:38 -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:23.122 | .driver_specific 00:26:23.122 | .nvme_error 00:26:23.122 | .status_code 00:26:23.122 | .command_transient_transport_error' 00:26:23.122 15:13:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:23.382 15:13:39 -- host/digest.sh@71 -- # (( 200 > 0 )) 00:26:23.382 15:13:39 -- host/digest.sh@73 -- # killprocess 85850 00:26:23.382 15:13:39 -- common/autotest_common.sh@936 -- # '[' -z 85850 ']' 00:26:23.382 15:13:39 -- common/autotest_common.sh@940 -- # kill -0 85850 00:26:23.382 15:13:39 -- common/autotest_common.sh@941 -- # uname 00:26:23.382 15:13:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:23.382 15:13:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85850 00:26:23.382 15:13:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:23.382 15:13:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:23.382 15:13:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85850' 00:26:23.382 killing process with pid 85850 00:26:23.382 15:13:39 -- common/autotest_common.sh@955 -- # kill 85850 00:26:23.382 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.382 00:26:23.382 Latency(us) 00:26:23.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.382 =================================================================================================================== 00:26:23.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.382 15:13:39 -- common/autotest_common.sh@960 -- # wait 85850 00:26:23.642 15:13:39 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:23.642 15:13:39 -- host/digest.sh@54 -- # local rw bs qd 00:26:23.642 15:13:39 -- host/digest.sh@56 -- # rw=randwrite 00:26:23.642 15:13:39 -- host/digest.sh@56 -- # bs=131072 00:26:23.642 15:13:39 -- host/digest.sh@56 -- # qd=16 00:26:23.642 15:13:39 -- host/digest.sh@58 -- # bperfpid=85940 00:26:23.642 15:13:39 -- host/digest.sh@60 -- # waitforlisten 85940 /var/tmp/bperf.sock 00:26:23.642 15:13:39 -- common/autotest_common.sh@817 -- # '[' -z 85940 ']' 00:26:23.642 15:13:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.642 15:13:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:23.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.642 15:13:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.642 15:13:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:23.642 15:13:39 -- common/autotest_common.sh@10 -- # set +x 00:26:23.642 15:13:39 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:23.642 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.642 Zero copy mechanism will not be used. 00:26:23.642 [2024-04-18 15:13:39.319749] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:26:23.642 [2024-04-18 15:13:39.319828] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85940 ] 00:26:23.901 [2024-04-18 15:13:39.462841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.901 [2024-04-18 15:13:39.563121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.840 15:13:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:24.840 15:13:40 -- common/autotest_common.sh@850 -- # return 0 00:26:24.840 15:13:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.840 15:13:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.840 15:13:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.840 15:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.840 15:13:40 -- common/autotest_common.sh@10 -- # set +x 00:26:24.840 15:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.840 15:13:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.840 15:13:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.098 nvme0n1 00:26:25.098 15:13:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:25.098 15:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.098 15:13:40 -- common/autotest_common.sh@10 -- # set +x 00:26:25.098 15:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.098 15:13:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.098 15:13:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.357 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.357 Zero copy mechanism will not be used. 00:26:25.357 Running I/O for 2 seconds... 
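[editorial note] The shell trace above (host/digest.sh@27-71 and @115) shows how the harness judges the previous randwrite run and stages this one: it pulls per-bdev error counters over the bperf RPC socket, extracts the transient-transport-error count with jq, requires it to be non-zero (200 here), kills the old bdevperf (pid 85850), then starts a new bdevperf for run_bperf_err randwrite 131072 16 and, before perform_tests, attaches the controller with data digest enabled and corrupts every 32nd crc32c calculation, which is what produces the stream of data_crc32_calc_done errors that follows. The lines below are a condensed, hedged sketch of that sequence, not the script itself; $SPDK_DIR stands in for the /home/vagrant/spdk_repo/spdk paths seen in the trace, and it assumes the accel_error_inject_error calls issued via rpc_cmd go to the nvmf target's default RPC socket while the bperf_rpc calls go to /var/tmp/bperf.sock as shown.

  # Sketch only; helper names and socket layout beyond what the trace shows are assumptions.
  bperf_rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # bdevperf instance
  rpc_cmd()   { "$SPDK_DIR"/scripts/rpc.py "$@"; }                          # nvmf target, default socket

  # digest.sh@27-28: count COMMAND TRANSIENT TRANSPORT ERROR completions from the finished run
  errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))        # digest.sh@71: the run above reported 200

  # digest.sh@57/@115: next run, 128 KiB random writes, queue depth 16, 2 seconds
  # (the real harness then waits for the RPC socket before issuing commands)
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # digest.sh@61-67: keep NVMe error stats, retry forever, attach with data digest (--ddgst),
  # then corrupt every 32nd crc32c calculation so each WRITE completes with a data digest error
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the injection interval of 32 and an I/O size of 131072 (32 crc32c blocks of 4 KiB), every WRITE carries one corrupted digest, so each command in the log below completes with TRANSIENT TRANSPORT ERROR (00/22) rather than success.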
00:26:25.357 [2024-04-18 15:13:40.898661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.899124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.899161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.903012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.903420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.903452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.907782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.908198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.908228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.912122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.912525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.912560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.916424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.916805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.916831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.920717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.921113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.921137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.924950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.925342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.925368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.929302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.929695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.929721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.933609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.934016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.934041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.937859] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.938290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.938316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.942180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.942583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.942609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.946458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.357 [2024-04-18 15:13:40.946875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.357 [2024-04-18 15:13:40.946903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.357 [2024-04-18 15:13:40.950841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.951256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.951295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.955200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.955621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.955648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.959498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.959923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.959949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.963911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.964315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.964341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.968333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.968724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.968748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.972608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.973003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.973028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.976884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.977304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.977330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.981285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.981691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.981712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.985583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.985989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.986015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.989844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.990252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.990278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.994212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.994625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.994650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:40.998613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:40.999024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:40.999049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.002948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.003351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.003376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.007264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.007673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.007698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.011651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.012062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.015878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.016262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 
[2024-04-18 15:13:41.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.020269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.020681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.020706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.024695] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.025075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.025100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.029062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.029427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.029453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.033506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.033951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.033977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.037774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.038199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.038225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.042169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.042574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.042599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.046505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.046912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.046936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.050865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.051257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.051282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.055155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.055530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.055565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.358 [2024-04-18 15:13:41.059483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.358 [2024-04-18 15:13:41.059889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.358 [2024-04-18 15:13:41.059914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.063849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.064240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.064265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.068168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.068579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.068603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.072466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.072880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.072905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.076935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.077308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.077331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.081313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.081736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.081763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.085707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.086116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.086142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.090038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.090440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.090466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.094317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.094728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.094755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.098674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.099080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.099106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.103004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.103431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.103457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.107379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.107786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.107814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.111712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.112099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.112127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.116062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.116445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.116472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.120528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.120957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.120985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.124975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.125426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.125454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.129455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.129860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.129912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.133961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.134368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.134394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.138276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 
[2024-04-18 15:13:41.138686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.138713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.142503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.142921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.142951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.146962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.147354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.147381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.151312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.151718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.151742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.155683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.619 [2024-04-18 15:13:41.156076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.619 [2024-04-18 15:13:41.156103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.619 [2024-04-18 15:13:41.160136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.160529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.160567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.164619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.165025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.165055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.169004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.169401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.169430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.173467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.173866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.173904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.177994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.178393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.178423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.182352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.182742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.182770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.186706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.187112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.187155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.191124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.191549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.191579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.195536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.195966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.195997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.199939] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.200382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.200420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.204409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.204848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.204876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.208862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.209289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.209319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.213297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.213691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.213724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.217735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.218160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.222131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.222556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.222594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.226535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.226954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.226984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:25.620 [2024-04-18 15:13:41.230895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.231297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.231329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.235217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.235629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.235658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.240355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.240773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.240802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.244740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.245125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.245155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.249091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.249508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.249549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.253444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.253843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.253872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.257856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.258257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.258286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.262214] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.262645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.262673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.266627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.267017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.267045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.271087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.271480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.271512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.275412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.620 [2024-04-18 15:13:41.275834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.620 [2024-04-18 15:13:41.275865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.620 [2024-04-18 15:13:41.279836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.280249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.280281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.284234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.284664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.284694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.288669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.289054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.289081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.293000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.293407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.293437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.297389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.297813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.297844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.301746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.302158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.302187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.306357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.306786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.306813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.310800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.311193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.311225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.315217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.315617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.315642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.621 [2024-04-18 15:13:41.319768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.621 [2024-04-18 15:13:41.320193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.621 [2024-04-18 15:13:41.320223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.324234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.324650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.324683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.328672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.329084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.329113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.332963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.333349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.333378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.337469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.337951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.337981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.341985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.342387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.342418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.346450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.346843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.346872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.350865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.351274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 
[2024-04-18 15:13:41.351305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.355315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.355741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.355771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.359688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.360076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.360106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.364009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.364433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.364464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.368409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.368829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.368862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.372913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.373355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.373386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.377370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.377785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.377814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.381769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.382173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.382203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.386267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.386678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.386710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.390678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.391075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.391107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.395073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.395512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.395550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.399489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.399916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.399947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.403904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.404310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.404337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.408406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.408831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.408856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.412793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.413184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.413214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.417264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.417765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.417808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.421681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.422098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.422134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.425958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.426391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.430833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.431200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.431230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.435095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.435444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.435470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.439307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.439694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.439721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.443286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.443634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.447211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.447537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.447573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.450994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.451324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.882 [2024-04-18 15:13:41.451354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.882 [2024-04-18 15:13:41.454877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.882 [2024-04-18 15:13:41.455194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.455235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.458781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.459088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.459117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.462380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.462676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.462706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.466103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.466410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.466434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.469754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 
[2024-04-18 15:13:41.470077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.470099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.473373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.473719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.473743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.476976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.477280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.480522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.480840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.480867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.484164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.484482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.484510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.487766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.488054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.488082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.491212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.491523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.491559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.494709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.494996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.495018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.498359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.498432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.498457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.501906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.502035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.502059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.505358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.505479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.505502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.508876] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.508968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.508991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.512353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.512450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.512471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.516018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.516121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.516141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.519568] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.519693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.519714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.523151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.523233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.523256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.526816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.526946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.526970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.530438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.530589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.530613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.534087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.534223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.534246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.537657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.537787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.537809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.541202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.541287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.541310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:25.883 [2024-04-18 15:13:41.544672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.544785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.544807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.548369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.548472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.548494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.551827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.551937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.551960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.555408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.555535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.555562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.559022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.559127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.559149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.562639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.562756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.562779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.566191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.566326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.566349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.569585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.569680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.569701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.573092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.573170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.573190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.576653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.576748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.576769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.580151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.580269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.580292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.883 [2024-04-18 15:13:41.583642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:25.883 [2024-04-18 15:13:41.583743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.883 [2024-04-18 15:13:41.583783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.587203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.587319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.587341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.590719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.590830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.590852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.594358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.594442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.594467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.597976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.598060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.598084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.601463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.601597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.601619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.604952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.605054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.605091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.608529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.608670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.608692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.611987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.612076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.612098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.615634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.615738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.615761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.619043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.619146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.619170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.622578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.622667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.626121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.626237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.626261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.629684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.629853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.629890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.633311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.633444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.633469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.636978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.637097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.637122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.640630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.640712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 
[2024-04-18 15:13:41.640737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.644273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.644353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.644380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.647842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.647931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.647956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.651435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.651557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.651593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.654987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.655114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.145 [2024-04-18 15:13:41.655137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.145 [2024-04-18 15:13:41.658578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.145 [2024-04-18 15:13:41.658686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.658708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.662130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.662237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.662257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.665656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.665754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.665775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.669242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.669341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.669362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.672792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.672870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.672890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.676364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.676501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.676522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.679907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.679992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.680013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.683573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.683687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.683708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.687228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.687353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.687377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.690816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.690927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.690949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.694378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.694475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.694495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.697962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.698038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.698060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.701441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.701599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.701621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.704905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.704970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.704990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.708430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.708562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.708584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.711958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.712085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.712106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.715425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.715505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.715526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.718900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.718990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.719010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.722439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.722617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.722654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.726068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.726164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.726185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.729575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.729675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.729695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.146 [2024-04-18 15:13:41.732947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.146 [2024-04-18 15:13:41.733068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.146 [2024-04-18 15:13:41.733088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.736541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.736689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.736711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.740085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 
[2024-04-18 15:13:41.740192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.740212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.743582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.743705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.743725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.747032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.747145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.747165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.750509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.750613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.750635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.753952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.754064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.757355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.757483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.757502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.760935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.761019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.761037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.764392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.764480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.764499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.767933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.768019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.768041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.771482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.771597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.771618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.774923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.775036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.775056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.778384] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.778508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.778528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.781819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.781977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.781997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.785253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.785340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.785361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.788856] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.788931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.788953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.792366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.792442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.792464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.795910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.796050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.796071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.799461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.799645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.803018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.803115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.803135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.806486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.806623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.806643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.809999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.810094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:26.147 [2024-04-18 15:13:41.813373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.813512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.813531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.816831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.816898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.816918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.147 [2024-04-18 15:13:41.820269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.147 [2024-04-18 15:13:41.820367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.147 [2024-04-18 15:13:41.820386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.823754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.823843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.823862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.827282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.827416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.827437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.830823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.830928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.830949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.834315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.834435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.834455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.837799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.837871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.841318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.841445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.841464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.844745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.844870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.844889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.148 [2024-04-18 15:13:41.848297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.148 [2024-04-18 15:13:41.848435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.148 [2024-04-18 15:13:41.848453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.851764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.851862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.855121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.855232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.855251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.858442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.858548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.858584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.861757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.861855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.861874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.865100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.865159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.865178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.868462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.868565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.868584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.871765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.871866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.871885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.875303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.875428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.875447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.436 [2024-04-18 15:13:41.878851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.436 [2024-04-18 15:13:41.879003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.436 [2024-04-18 15:13:41.879024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.882416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.882493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.882513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.885982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.886053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.886073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.889377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.889499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.889519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.892874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.892978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.896340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.896435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.896456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.899812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.899947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.899967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.903245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.903331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.903351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.906776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.906882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 
[2024-04-18 15:13:41.906902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.910141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.910209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.910228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.913488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.913636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.913655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.916959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.917026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.917045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.920340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.920448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.920468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.923740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.923840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.927215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.927288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.927308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.930547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.930630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.930650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.933889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.934019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.934039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.937186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.937319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.937338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.940710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.940866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.940902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.944140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.944213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.944233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.947662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.947758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.947777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.951208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.951304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.951336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.954763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.954869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.954889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.958197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.958297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.958321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.961804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.961900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.961920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.965317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.965383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.965403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.968825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.968952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.968972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.437 [2024-04-18 15:13:41.972219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.437 [2024-04-18 15:13:41.972298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.437 [2024-04-18 15:13:41.972318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.975698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.975828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.975847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.979226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.979333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.979353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.982721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.982840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.982859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.986115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.986185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.986205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.989557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.989669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.989689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.992912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.993000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.993019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.996340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:41.996455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:41.996474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:41.999927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.000038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.000058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.003306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 
[2024-04-18 15:13:42.003397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.003417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.006904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.006974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.006994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.010404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.010470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.010489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.013812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.013932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.013952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.017258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.017339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.017375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.020788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.020913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.020931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.024251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.024333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.024353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.027720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.027848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.027867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.031235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.031339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.031360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.034636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.034748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.034768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.038100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.038212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.038232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.041663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.041734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.041754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.045079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.045203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.045222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.048532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.048673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.048692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.051962] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.052024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.052043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.055501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.055653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.055673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.058945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.059057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.059076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.062422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.062502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.062522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.438 [2024-04-18 15:13:42.065831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.438 [2024-04-18 15:13:42.065990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.438 [2024-04-18 15:13:42.066010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.069204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.069295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.069314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.072622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.072705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.072723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:26.439 [2024-04-18 15:13:42.075979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.076132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.076150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.079352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.079528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.079549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.082904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.082992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.083012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.086447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.086528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.086550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.089938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.090069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.090089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.093402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.093530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.093551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.096996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.097320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.100688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.100906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.100934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.104191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.104468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.104491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.107718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.107923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.107950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.111191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.111310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.111341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-18 15:13:42.114691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.439 [2024-04-18 15:13:42.114801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-18 15:13:42.114824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.118151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.118352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.118376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.121710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.121995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.122023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.125138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.125329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.125354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.128725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.128933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.128954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.132255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.132342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.132363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.135676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.135821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.135840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.139016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.139185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.139205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.142483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.142610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.142630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.145951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.146074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.146094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.149424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.149597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.152834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.152935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.152954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.156303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.156451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.156470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.159781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.159930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.159950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.163191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.163360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.163380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.166459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.166595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.166616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.169831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.170002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 
[2024-04-18 15:13:42.170021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.173302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.173418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.173438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.176783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.176942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.176962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.180312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.180446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.180468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.183714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.183784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.183816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.187177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.714 [2024-04-18 15:13:42.187332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-04-18 15:13:42.187360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.714 [2024-04-18 15:13:42.190704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.190804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.190825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.194177] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.194311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.194332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.197628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.197740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.197760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.201221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.201371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.201390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.204805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.204919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.204951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.208318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.208465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.208485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.211844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.211971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.211992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.215269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.215425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.215445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.218754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.218911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.218932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.222242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.222403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.222423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.225658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.225800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.225820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.229126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.229294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.229314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.232604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.232695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.232715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.236077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.236224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.236244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.239582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.239757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.239778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.243056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.243191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.243212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.246397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.246563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.246584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.249895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.250010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.250030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.253345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.253494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.253514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.256820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.256965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.260267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.260411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.260431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.263701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.263826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.263845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.267082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.267241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.270591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.270807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.270826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.274020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.274154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.274176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.277507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.277646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.277666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.715 [2024-04-18 15:13:42.281042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.715 [2024-04-18 15:13:42.281189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.715 [2024-04-18 15:13:42.281209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.284491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.284684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.288000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.288206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.288233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.291542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 
[2024-04-18 15:13:42.291701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.291722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.294972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.295124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.295144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.298392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.298561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.298581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.301855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.302027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.302046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.305320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.305476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.305502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.308776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.308904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.308930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.312286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.312439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.312466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.315851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with 
pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.315963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.315984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.319369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.319513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.319549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.322855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.322996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.323017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.326394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.326524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.326544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.329942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.330079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.330099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.333352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.333450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.333470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.336903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.337005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.337025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.340422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.340551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.340584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.344038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.344167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.344188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.347587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.347758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.347779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.351055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.351174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.351194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.354640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.354773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.354793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.358090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.358196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.358218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.361659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.361742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.365155] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.365281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.365301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.368691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.368846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.368874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.372224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.372362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.372385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.716 [2024-04-18 15:13:42.375715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.716 [2024-04-18 15:13:42.375877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.716 [2024-04-18 15:13:42.375904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.379163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.379338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.379365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.382674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.382768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.382791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.386207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.386328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.386359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:26.717 [2024-04-18 15:13:42.389663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.389856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.393231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.393353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.393375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.396828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.396923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.396947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.400238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.400388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.400414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.403712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.403835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.403858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.407227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.407373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.407404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.717 [2024-04-18 15:13:42.411940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.717 [2024-04-18 15:13:42.412067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.717 [2024-04-18 15:13:42.412098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.998 [2024-04-18 15:13:42.415745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.998 [2024-04-18 15:13:42.415920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-04-18 15:13:42.415950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.998 [2024-04-18 15:13:42.419393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.998 [2024-04-18 15:13:42.419571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-04-18 15:13:42.419598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.998 [2024-04-18 15:13:42.422940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.998 [2024-04-18 15:13:42.423076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-04-18 15:13:42.423098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.998 [2024-04-18 15:13:42.426458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.998 [2024-04-18 15:13:42.426625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.998 [2024-04-18 15:13:42.426647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.998 [2024-04-18 15:13:42.430011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.998 [2024-04-18 15:13:42.430119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.430142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.433410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.433585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.436938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.437129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.437156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.440464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.440632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.440660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.444555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.444710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.448991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.449079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.449105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.453515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.453650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.453678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.457086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.457200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.460721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.460997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.461041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.464268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.464361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.464391] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.467807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.467906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.467931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.471346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.471454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.471477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.474789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.474879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.474902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.478283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.478394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.478417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.481788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.481899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.481922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.485309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.485412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.488883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.488999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.489026] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.492409] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.492492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.492514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.495886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.495972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.495992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.499459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.499556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.499579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.502912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.503042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.503064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.506365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.506462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.506485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.509866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.509986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.510007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.513370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.513479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.999 [2024-04-18 15:13:42.513500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.516881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.516981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.517003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.520352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.520440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.520473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.523845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.523949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.999 [2024-04-18 15:13:42.523970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.999 [2024-04-18 15:13:42.527393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:26.999 [2024-04-18 15:13:42.527479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.527500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.530927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.531023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.531045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.534478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.534564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.534584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.537944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.538099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.538120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.541462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.541547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.541579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.544959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.545075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.545097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.548467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.548644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.548665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.552054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.552145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.552166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.555631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.555766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.555787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.559137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.559221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.559242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.562729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.562819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.562840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.566228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.566397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.566419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.569729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.569814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.569852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.573227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.573341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.573361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.576663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.576801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.576823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.580254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.580325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.580347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.583856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.583995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.584018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.587393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.587486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.587508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.590986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.591157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.591179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.594522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.594701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.594725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.598048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.598127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.598149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.601598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.601688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.601711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.605067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.605230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.605252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.608673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.608775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.608799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.612253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 
[2024-04-18 15:13:42.612367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.612389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.615881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.615968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.615991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.619360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.619451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.000 [2024-04-18 15:13:42.619474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.000 [2024-04-18 15:13:42.622971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.000 [2024-04-18 15:13:42.623050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.623073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.626423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.626522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.626558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.629992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.630085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.630108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.633479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.633598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.633621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.637075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with 
pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.637172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.637212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.640723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.640848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.640889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.644347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.644446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.644470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.647966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.648049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.648073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.651513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.651699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.651730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.655185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.655309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.655332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.658718] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.658803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.658826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.662256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.662339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.662363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.665759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.665834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.665857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.669321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.669478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.672921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.673009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.673031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.676372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.676512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.676533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.679960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.680059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.680082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.683558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.683640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.683663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.687030] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.687147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.687169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.690599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.690726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.690748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.694169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.694269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.694292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.697680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.697774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.697796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.001 [2024-04-18 15:13:42.701269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.001 [2024-04-18 15:13:42.701394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.001 [2024-04-18 15:13:42.701417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.704829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.704973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.708367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.708468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.708490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:27.261 [2024-04-18 15:13:42.711939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.712064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.712085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.715522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.715633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.715655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.719113] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.719181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.719202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.722660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.722760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.722783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.726183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.726286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.726308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.729706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.729835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.261 [2024-04-18 15:13:42.729857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.261 [2024-04-18 15:13:42.733204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.261 [2024-04-18 15:13:42.733317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.733338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.736703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.736824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.736845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.740136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.740276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.740298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.743680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.743760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.743781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.747072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.747166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.747189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.750597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.750687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.750710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.754152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.754289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.754311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.757707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.757809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.757830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.761286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.761425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.761446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.764767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.764886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.764906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.768265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.768345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.768369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.771747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.771880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.771901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.775212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.775318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.775339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.778806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.778933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.778954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.782422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.782549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.782570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.786078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.786153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.786174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.789508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.789663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.789684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.793038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.793153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.793173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.796571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.796672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.800013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.800107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.800129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.803468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.803665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.803686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.807072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.807160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 
[2024-04-18 15:13:42.807180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.810659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.810757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.810778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.814172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.814244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.814266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.817710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.817864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.817895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.821109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.821264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.821285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.824620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.262 [2024-04-18 15:13:42.824728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.262 [2024-04-18 15:13:42.824753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.262 [2024-04-18 15:13:42.828125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.828200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.828226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.831676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.831912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.831940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.835129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.835217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.835240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.838624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.838756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.838777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.842110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.842239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.842260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.845514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.845594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.845615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.848975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.849089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.849110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.852429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.852520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.852557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.855983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.856084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.856106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.859620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.859689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.859710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.863003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.863122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.863143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.866552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.866657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.866681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.870054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.870170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.870192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.873583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.873683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.873719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.877117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.877204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.877227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.880648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.880734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.880757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.263 [2024-04-18 15:13:42.884118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1372e10) with pdu=0x2000190fef90 00:26:27.263 [2024-04-18 15:13:42.884186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.263 [2024-04-18 15:13:42.884209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.263 00:26:27.263 Latency(us) 00:26:27.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:27.263 nvme0n1 : 2.00 8271.21 1033.90 0.00 0.00 1930.69 1414.68 10948.99 00:26:27.263 =================================================================================================================== 00:26:27.263 Total : 8271.21 1033.90 0.00 0.00 1930.69 1414.68 10948.99 00:26:27.263 0 00:26:27.263 15:13:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:27.263 15:13:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:27.263 | .driver_specific 00:26:27.263 | .nvme_error 00:26:27.263 | .status_code 00:26:27.263 | .command_transient_transport_error' 00:26:27.263 15:13:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:27.263 15:13:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:27.523 15:13:43 -- host/digest.sh@71 -- # (( 534 > 0 )) 00:26:27.523 15:13:43 -- host/digest.sh@73 -- # killprocess 85940 00:26:27.523 15:13:43 -- common/autotest_common.sh@936 -- # '[' -z 85940 ']' 00:26:27.523 15:13:43 -- common/autotest_common.sh@940 -- # kill -0 85940 00:26:27.523 15:13:43 -- common/autotest_common.sh@941 -- # uname 00:26:27.523 15:13:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.523 15:13:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85940 00:26:27.523 15:13:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:27.523 15:13:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:27.523 killing process with pid 85940 00:26:27.523 15:13:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85940' 00:26:27.523 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.523 00:26:27.523 Latency(us) 00:26:27.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.523 =================================================================================================================== 00:26:27.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.523 15:13:43 -- common/autotest_common.sh@955 -- # kill 85940 00:26:27.523 15:13:43 -- common/autotest_common.sh@960 -- # wait 85940 00:26:27.780 15:13:43 -- host/digest.sh@116 -- # killprocess 85630 00:26:27.780 15:13:43 -- common/autotest_common.sh@936 -- # '[' -z 85630 ']' 00:26:27.780 15:13:43 -- common/autotest_common.sh@940 -- # kill -0 85630 00:26:27.780 15:13:43 -- common/autotest_common.sh@941 -- # uname 00:26:27.780 15:13:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:26:27.780 15:13:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85630 00:26:27.780 15:13:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:27.780 killing process with pid 85630 00:26:27.780 15:13:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:27.780 15:13:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85630' 00:26:27.780 15:13:43 -- common/autotest_common.sh@955 -- # kill 85630 00:26:27.780 15:13:43 -- common/autotest_common.sh@960 -- # wait 85630 00:26:28.038 00:26:28.038 real 0m17.879s 00:26:28.038 user 0m32.735s 00:26:28.038 sys 0m5.392s 00:26:28.038 15:13:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.038 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.038 ************************************ 00:26:28.038 END TEST nvmf_digest_error 00:26:28.038 ************************************ 00:26:28.038 15:13:43 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:28.038 15:13:43 -- host/digest.sh@150 -- # nvmftestfini 00:26:28.038 15:13:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:28.038 15:13:43 -- nvmf/common.sh@117 -- # sync 00:26:28.297 15:13:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.297 15:13:43 -- nvmf/common.sh@120 -- # set +e 00:26:28.297 15:13:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.297 15:13:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.297 rmmod nvme_tcp 00:26:28.297 rmmod nvme_fabrics 00:26:28.297 rmmod nvme_keyring 00:26:28.297 15:13:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.297 15:13:43 -- nvmf/common.sh@124 -- # set -e 00:26:28.297 15:13:43 -- nvmf/common.sh@125 -- # return 0 00:26:28.297 15:13:43 -- nvmf/common.sh@478 -- # '[' -n 85630 ']' 00:26:28.297 15:13:43 -- nvmf/common.sh@479 -- # killprocess 85630 00:26:28.297 15:13:43 -- common/autotest_common.sh@936 -- # '[' -z 85630 ']' 00:26:28.297 15:13:43 -- common/autotest_common.sh@940 -- # kill -0 85630 00:26:28.297 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85630) - No such process 00:26:28.297 Process with pid 85630 is not found 00:26:28.297 15:13:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85630 is not found' 00:26:28.297 15:13:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:28.297 15:13:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:28.297 15:13:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:28.297 15:13:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.297 15:13:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.297 15:13:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.297 15:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.297 15:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.297 15:13:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:28.297 00:26:28.297 real 0m37.447s 00:26:28.297 user 1m7.252s 00:26:28.297 sys 0m11.188s 00:26:28.297 15:13:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:28.297 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.297 ************************************ 00:26:28.297 END TEST nvmf_digest 00:26:28.297 ************************************ 00:26:28.297 15:13:43 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:26:28.297 15:13:43 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:26:28.297 15:13:43 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:28.297 15:13:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:28.297 15:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:28.297 15:13:43 -- common/autotest_common.sh@10 -- # set +x 00:26:28.556 ************************************ 00:26:28.556 START TEST nvmf_mdns_discovery 00:26:28.556 ************************************ 00:26:28.556 15:13:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:28.556 * Looking for test storage... 00:26:28.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:28.556 15:13:44 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:28.556 15:13:44 -- nvmf/common.sh@7 -- # uname -s 00:26:28.556 15:13:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.556 15:13:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.556 15:13:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.556 15:13:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.556 15:13:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.556 15:13:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.556 15:13:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.556 15:13:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.815 15:13:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.815 15:13:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.815 15:13:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:26:28.815 15:13:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:26:28.815 15:13:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.815 15:13:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.815 15:13:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:28.815 15:13:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.815 15:13:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.815 15:13:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.815 15:13:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.815 15:13:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.815 15:13:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.815 15:13:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.815 15:13:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.815 15:13:44 -- paths/export.sh@5 -- # export PATH 00:26:28.815 15:13:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.815 15:13:44 -- nvmf/common.sh@47 -- # : 0 00:26:28.815 15:13:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:28.815 15:13:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:28.815 15:13:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.815 15:13:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.815 15:13:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.815 15:13:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:28.816 15:13:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:28.816 15:13:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:26:28.816 15:13:44 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:26:28.816 15:13:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:28.816 15:13:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.816 15:13:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:28.816 15:13:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:28.816 15:13:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:28.816 15:13:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.816 15:13:44 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:28.816 15:13:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.816 15:13:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:28.816 15:13:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:28.816 15:13:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:28.816 15:13:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:28.816 15:13:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:28.816 15:13:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:28.816 15:13:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.816 15:13:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.816 15:13:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:28.816 15:13:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:28.816 15:13:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:28.816 15:13:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:28.816 15:13:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:28.816 15:13:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.816 15:13:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:28.816 15:13:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:28.816 15:13:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:28.816 15:13:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:28.816 15:13:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:28.816 15:13:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:28.816 Cannot find device "nvmf_tgt_br" 00:26:28.816 15:13:44 -- nvmf/common.sh@155 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:28.816 Cannot find device "nvmf_tgt_br2" 00:26:28.816 15:13:44 -- nvmf/common.sh@156 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:28.816 15:13:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:28.816 Cannot find device "nvmf_tgt_br" 00:26:28.816 15:13:44 -- nvmf/common.sh@158 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:28.816 Cannot find device "nvmf_tgt_br2" 00:26:28.816 15:13:44 -- nvmf/common.sh@159 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:28.816 15:13:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:28.816 15:13:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.816 15:13:44 -- nvmf/common.sh@162 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.816 15:13:44 -- nvmf/common.sh@163 -- # true 00:26:28.816 15:13:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.816 15:13:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.816 15:13:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:28.816 15:13:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:29.075 15:13:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:26:29.075 15:13:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:29.075 15:13:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:29.075 15:13:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:29.075 15:13:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:29.075 15:13:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:29.075 15:13:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:29.075 15:13:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:29.075 15:13:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:29.075 15:13:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:29.075 15:13:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:29.075 15:13:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:29.075 15:13:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:29.075 15:13:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:29.075 15:13:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:29.075 15:13:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:29.076 15:13:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:29.076 15:13:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:29.076 15:13:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:29.076 15:13:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:29.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:26:29.076 00:26:29.076 --- 10.0.0.2 ping statistics --- 00:26:29.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.076 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:26:29.076 15:13:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:29.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:29.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:26:29.076 00:26:29.076 --- 10.0.0.3 ping statistics --- 00:26:29.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.076 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:29.076 15:13:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:29.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:26:29.076 00:26:29.076 --- 10.0.0.1 ping statistics --- 00:26:29.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.076 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:26:29.076 15:13:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.076 15:13:44 -- nvmf/common.sh@422 -- # return 0 00:26:29.076 15:13:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:29.076 15:13:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.076 15:13:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:29.076 15:13:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:29.076 15:13:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.076 15:13:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:29.076 15:13:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:29.076 15:13:44 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:29.076 15:13:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:29.076 15:13:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:29.076 15:13:44 -- common/autotest_common.sh@10 -- # set +x 00:26:29.076 15:13:44 -- nvmf/common.sh@470 -- # nvmfpid=86238 00:26:29.076 15:13:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:29.076 15:13:44 -- nvmf/common.sh@471 -- # waitforlisten 86238 00:26:29.076 15:13:44 -- common/autotest_common.sh@817 -- # '[' -z 86238 ']' 00:26:29.076 15:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.076 15:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:29.076 15:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.076 15:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:29.076 15:13:44 -- common/autotest_common.sh@10 -- # set +x 00:26:29.335 [2024-04-18 15:13:44.810495] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:29.335 [2024-04-18 15:13:44.810592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.335 [2024-04-18 15:13:44.954381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.594 [2024-04-18 15:13:45.054668] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.594 [2024-04-18 15:13:45.054729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.594 [2024-04-18 15:13:45.054740] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.594 [2024-04-18 15:13:45.054749] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.594 [2024-04-18 15:13:45.054757] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
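The nvmf_veth_init trace above amounts to a small iproute2 topology: one veth pair for the initiator side, two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side peers together. A condensed, illustrative sketch of that same topology, with the interface names and 10.0.0.0/24 addressing taken from the trace (this is not the exact common.sh code):

# create the target namespace and the three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side interfaces into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic in and verify reachability, as the harness does
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1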
00:26:29.594 [2024-04-18 15:13:45.054791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.162 15:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:30.162 15:13:45 -- common/autotest_common.sh@850 -- # return 0 00:26:30.162 15:13:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:30.162 15:13:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:30.162 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.162 15:13:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.162 15:13:45 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:26:30.162 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.162 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.162 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.162 15:13:45 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:26:30.162 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.162 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.162 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.162 15:13:45 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:30.162 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.162 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.162 [2024-04-18 15:13:45.865765] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 [2024-04-18 15:13:45.877898] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 null0 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 null1 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 null2 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 null3 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
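With the target reactor running, the rpc_cmd calls traced above configure it entirely over JSON-RPC. A minimal stand-alone equivalent, assuming the target was started with --wait-for-rpc and that scripts/rpc.py talks to its default /var/tmp/spdk.sock (an illustrative sketch, not the harness code itself):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --discovery-filter=address    # filter discovery log entries by listener address
$rpc framework_start_init                          # complete the init deferred by --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, flags as in the trace (-u = in-capsule data size)
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do
    $rpc bdev_null_create "$b" 1000 512            # null bdevs the test exposes as namespaces later
done
$rpc bdev_wait_for_examine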
00:26:30.422 15:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 15:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@47 -- # hostpid=86288 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:30.422 15:13:45 -- host/mdns_discovery.sh@48 -- # waitforlisten 86288 /tmp/host.sock 00:26:30.422 15:13:45 -- common/autotest_common.sh@817 -- # '[' -z 86288 ']' 00:26:30.422 15:13:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:30.422 15:13:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:30.422 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:30.422 15:13:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:30.422 15:13:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:30.422 15:13:45 -- common/autotest_common.sh@10 -- # set +x 00:26:30.422 [2024-04-18 15:13:45.988603] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:30.422 [2024-04-18 15:13:45.988674] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86288 ] 00:26:30.681 [2024-04-18 15:13:46.133582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.681 [2024-04-18 15:13:46.236520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.250 15:13:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:31.250 15:13:46 -- common/autotest_common.sh@850 -- # return 0 00:26:31.250 15:13:46 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:26:31.250 15:13:46 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:26:31.250 15:13:46 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:26:31.510 15:13:46 -- host/mdns_discovery.sh@57 -- # avahipid=86313 00:26:31.510 15:13:46 -- host/mdns_discovery.sh@58 -- # sleep 1 00:26:31.510 15:13:47 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:26:31.510 15:13:47 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:26:31.510 Process 1013 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:26:31.510 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:26:31.510 Successfully dropped root privileges. 00:26:31.510 avahi-daemon 0.8 starting up. 00:26:31.510 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:26:31.510 Successfully called chroot(). 00:26:31.510 Successfully dropped remaining capabilities. 00:26:31.510 No service file found in /etc/avahi/services. 00:26:32.449 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:26:32.449 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:26:32.449 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:26:32.449 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:26:32.449 Network interface enumeration completed. 
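The inline configuration fed to avahi-daemon through /dev/fd/63 above limits mDNS to the two target-side interfaces and to IPv4. Written out as an ordinary configuration file it would look like the following (illustrative; the file name is arbitrary, only the [server] keys come from the trace):

# mdns-test.conf - equivalent of the inline config passed to avahi-daemon above
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no

# run the daemon inside the target namespace so it only ever sees those interfaces
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f mdns-test.conf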
00:26:32.449 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:26:32.449 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:26:32.449 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:26:32.449 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:26:32.449 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2612500088. 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:32.449 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.449 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:32.449 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.449 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.449 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:32.449 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@68 -- # sort 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@68 -- # xargs 00:26:32.449 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@64 -- # sort 00:26:32.449 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@64 -- # xargs 00:26:32.449 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.449 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:26:32.449 15:13:48 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:32.449 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.449 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.712 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:32.712 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # xargs 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # sort 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:26:32.712 
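The discovery side of the test talks to the second nvmf_tgt instance through /tmp/host.sock, and the state it builds up can be queried by hand with the same RPCs the get_* helpers wrap. A rough sketch, with method names and jq filters as used in the trace (the mdns*_nvme* names only appear once the avahi-advertised CDC entries have been resolved and connected):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /tmp/host.sock log_set_flag bdev_nvme
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# poll what discovery has produced so far
$rpc -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'   # the mdns discovery service itself
$rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'           # e.g. mdns0_nvme0 mdns1_nvme0
$rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                      # e.g. mdns0_nvme0n1 mdns1_nvme0n1
$rpc -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'   # listener ports per path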
15:13:48 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:32.712 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # sort 00:26:32.712 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # xargs 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:32.712 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.712 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # sort 00:26:32.712 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.712 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@68 -- # xargs 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.712 [2024-04-18 15:13:48.325852] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.712 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:32.712 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # sort 00:26:32.712 15:13:48 -- host/mdns_discovery.sh@64 -- # xargs 00:26:32.712 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 [2024-04-18 15:13:48.426460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 15:13:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 [2024-04-18 15:13:48.486315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:32.972 15:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:32.972 15:13:48 -- common/autotest_common.sh@10 -- # set +x 00:26:32.972 [2024-04-18 15:13:48.498275] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:32.972 15:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86364 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:26:32.972 15:13:48 -- host/mdns_discovery.sh@125 -- # sleep 5 00:26:33.540 [2024-04-18 15:13:49.226237] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:33.800 Established under name 'CDC' 00:26:34.060 [2024-04-18 15:13:49.625653] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:34.060 [2024-04-18 15:13:49.625721] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:34.060 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:34.060 cookie is 0 00:26:34.060 is_local: 1 00:26:34.060 our_own: 0 00:26:34.060 wide_area: 0 00:26:34.060 multicast: 1 00:26:34.060 cached: 1 00:26:34.060 [2024-04-18 15:13:49.725459] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:34.060 [2024-04-18 15:13:49.725511] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:34.060 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:34.060 cookie is 0 00:26:34.060 is_local: 1 00:26:34.060 our_own: 0 00:26:34.060 wide_area: 0 00:26:34.060 multicast: 1 00:26:34.060 cached: 1 00:26:34.999 [2024-04-18 15:13:50.632311] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:34.999 [2024-04-18 15:13:50.632358] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:26:34.999 [2024-04-18 15:13:50.632375] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:35.259 [2024-04-18 15:13:50.718307] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:26:35.259 [2024-04-18 15:13:50.731737] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.259 [2024-04-18 15:13:50.731797] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.259 [2024-04-18 15:13:50.731815] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.259 [2024-04-18 15:13:50.780093] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:35.259 [2024-04-18 15:13:50.780150] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:35.259 [2024-04-18 15:13:50.818253] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:26:35.259 [2024-04-18 15:13:50.873774] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:35.259 [2024-04-18 15:13:50.873830] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@80 -- # sort 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@80 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@76 -- # sort 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@76 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@68 -- # sort 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@68 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set 
+x 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@64 -- # sort 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@64 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@72 -- # xargs 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:26:38.560 15:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:38.560 15:13:53 -- common/autotest_common.sh@10 -- # set +x 00:26:38.560 15:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:38.560 15:13:53 -- host/mdns_discovery.sh@139 -- # sleep 1 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:39.499 15:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.499 15:13:54 -- common/autotest_common.sh@10 -- # set +x 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@64 -- # xargs 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@64 -- # sort 00:26:39.499 15:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:39.499 15:13:54 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:39.499 15:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.499 15:13:54 -- common/autotest_common.sh@10 -- # set +x 00:26:39.499 15:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.499 15:13:55 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:26:39.499 15:13:55 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:39.499 15:13:55 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:26:39.499 15:13:55 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:39.499 15:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.499 15:13:55 -- common/autotest_common.sh@10 -- # set +x 00:26:39.499 [2024-04-18 15:13:55.039968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:39.499 [2024-04-18 15:13:55.040732] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:39.499 [2024-04-18 15:13:55.040767] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.500 [2024-04-18 15:13:55.040799] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:39.500 [2024-04-18 15:13:55.040811] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:39.500 15:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.500 15:13:55 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:26:39.500 15:13:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.500 15:13:55 -- common/autotest_common.sh@10 -- # set +x 00:26:39.500 [2024-04-18 15:13:55.051900] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:39.500 [2024-04-18 15:13:55.052715] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:39.500 [2024-04-18 15:13:55.052769] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:39.500 15:13:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.500 15:13:55 -- host/mdns_discovery.sh@149 -- # sleep 1 00:26:39.500 [2024-04-18 15:13:55.185655] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:26:39.500 [2024-04-18 15:13:55.185937] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:26:39.760 [2024-04-18 15:13:55.247976] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:39.760 [2024-04-18 15:13:55.248035] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:39.760 [2024-04-18 15:13:55.248044] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:39.760 [2024-04-18 15:13:55.248071] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.760 [2024-04-18 15:13:55.248113] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:39.760 [2024-04-18 15:13:55.248122] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:39.760 [2024-04-18 15:13:55.248128] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:39.760 [2024-04-18 15:13:55.248140] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:39.760 [2024-04-18 15:13:55.294619] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:39.760 [2024-04-18 15:13:55.294661] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:39.760 [2024-04-18 15:13:55.294700] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:39.760 [2024-04-18 15:13:55.294707] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@68 -- # sort 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@68 -- # xargs 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@64 -- # sort 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@64 -- # xargs 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # xargs 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@72 -- # xargs 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 [2024-04-18 15:13:56.359626] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:40.697 [2024-04-18 15:13:56.359677] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.697 [2024-04-18 15:13:56.359709] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:40.697 [2024-04-18 15:13:56.359720] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:40.697 15:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.697 15:13:56 -- common/autotest_common.sh@10 -- # set +x 00:26:40.697 [2024-04-18 15:13:56.364613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.364658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.364673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.364683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.364694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.364704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 
15:13:56.364714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.364724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.364734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.697 [2024-04-18 15:13:56.366847] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:40.697 [2024-04-18 15:13:56.366895] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:40.697 [2024-04-18 15:13:56.368920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.368953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.368965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.368975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.368986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.368996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.369006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.697 [2024-04-18 15:13:56.369016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.697 [2024-04-18 15:13:56.369025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.697 15:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.697 15:13:56 -- host/mdns_discovery.sh@162 -- # sleep 1 00:26:40.697 [2024-04-18 15:13:56.374529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.697 [2024-04-18 15:13:56.378862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.697 [2024-04-18 15:13:56.384545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.697 [2024-04-18 15:13:56.384703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.697 [2024-04-18 15:13:56.384750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.697 [2024-04-18 15:13:56.384764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.697 [2024-04-18 15:13:56.384776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.697 [2024-04-18 15:13:56.384794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.697 [2024-04-18 15:13:56.384827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.697 [2024-04-18 15:13:56.384838] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.697 [2024-04-18 15:13:56.384850] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.697 [2024-04-18 15:13:56.384866] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.697 [2024-04-18 15:13:56.388859] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.697 [2024-04-18 15:13:56.388968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.697 [2024-04-18 15:13:56.389007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.697 [2024-04-18 15:13:56.389020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.697 [2024-04-18 15:13:56.389030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.697 [2024-04-18 15:13:56.389045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.697 [2024-04-18 15:13:56.389058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.697 [2024-04-18 15:13:56.389067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.697 [2024-04-18 15:13:56.389077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.697 [2024-04-18 15:13:56.389090] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.697 [2024-04-18 15:13:56.394600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.698 [2024-04-18 15:13:56.394687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.698 [2024-04-18 15:13:56.394725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.698 [2024-04-18 15:13:56.394737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.698 [2024-04-18 15:13:56.394748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.698 [2024-04-18 15:13:56.394763] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.698 [2024-04-18 15:13:56.394777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.698 [2024-04-18 15:13:56.394787] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.698 [2024-04-18 15:13:56.394796] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.698 [2024-04-18 15:13:56.394809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.698 [2024-04-18 15:13:56.398907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.698 [2024-04-18 15:13:56.399001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.698 [2024-04-18 15:13:56.399037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.698 [2024-04-18 15:13:56.399049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.698 [2024-04-18 15:13:56.399059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.698 [2024-04-18 15:13:56.399073] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.698 [2024-04-18 15:13:56.399085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.698 [2024-04-18 15:13:56.399094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.698 [2024-04-18 15:13:56.399103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.698 [2024-04-18 15:13:56.399114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.959 [2024-04-18 15:13:56.404637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.959 [2024-04-18 15:13:56.404721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.404759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.404771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.959 [2024-04-18 15:13:56.404782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.404796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.404811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.404820] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.404830] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.959 [2024-04-18 15:13:56.404857] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.959 [2024-04-18 15:13:56.408940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.959 [2024-04-18 15:13:56.409014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.409050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.409062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.959 [2024-04-18 15:13:56.409072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.409086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.409100] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.409109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.409118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.959 [2024-04-18 15:13:56.409130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.959 [2024-04-18 15:13:56.414679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.959 [2024-04-18 15:13:56.414782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.414821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.414833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.959 [2024-04-18 15:13:56.414844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.414859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.414888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.414898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.414908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.959 [2024-04-18 15:13:56.414922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.959 [2024-04-18 15:13:56.418970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.959 [2024-04-18 15:13:56.419064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.419100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.419112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.959 [2024-04-18 15:13:56.419122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.419136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.419149] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.419158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.419167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.959 [2024-04-18 15:13:56.419179] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.959 [2024-04-18 15:13:56.424728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.959 [2024-04-18 15:13:56.424806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.424846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.424858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.959 [2024-04-18 15:13:56.424868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.424882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.424911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.424921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.424931] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.959 [2024-04-18 15:13:56.424944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.959 [2024-04-18 15:13:56.429017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.959 [2024-04-18 15:13:56.429093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.429129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.429141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.959 [2024-04-18 15:13:56.429151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.429165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.429177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.429186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.429196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.959 [2024-04-18 15:13:56.429207] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.959 [2024-04-18 15:13:56.434762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.959 [2024-04-18 15:13:56.434839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.434875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.434888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.959 [2024-04-18 15:13:56.434898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.434911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.434938] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.434949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.434958] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.959 [2024-04-18 15:13:56.434971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.959 [2024-04-18 15:13:56.439049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.959 [2024-04-18 15:13:56.439152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.439190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.959 [2024-04-18 15:13:56.439202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.959 [2024-04-18 15:13:56.439213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.959 [2024-04-18 15:13:56.439229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.959 [2024-04-18 15:13:56.439242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.959 [2024-04-18 15:13:56.439251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.959 [2024-04-18 15:13:56.439261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.959 [2024-04-18 15:13:56.439273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.959 [2024-04-18 15:13:56.444798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.960 [2024-04-18 15:13:56.444877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.444913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.444925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.960 [2024-04-18 15:13:56.444935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.444949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.444975] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.444984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.444993] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.960 [2024-04-18 15:13:56.445005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.960 [2024-04-18 15:13:56.449088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.960 [2024-04-18 15:13:56.449153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.449189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.449200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.960 [2024-04-18 15:13:56.449210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.449223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.449236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.449246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.449255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.960 [2024-04-18 15:13:56.449267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.960 [2024-04-18 15:13:56.454832] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.960 [2024-04-18 15:13:56.454912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.454950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.454962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.960 [2024-04-18 15:13:56.454973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.454988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.455077] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.455088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.455098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.960 [2024-04-18 15:13:56.455111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.960 [2024-04-18 15:13:56.459116] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.960 [2024-04-18 15:13:56.459197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.459234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.459245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.960 [2024-04-18 15:13:56.459255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.459270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.459283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.459292] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.459301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.960 [2024-04-18 15:13:56.459313] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.960 [2024-04-18 15:13:56.464870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.960 [2024-04-18 15:13:56.464943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.464992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.465003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.960 [2024-04-18 15:13:56.465013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.465027] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.465053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.465062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.465071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.960 [2024-04-18 15:13:56.465100] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.960 [2024-04-18 15:13:56.469153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.960 [2024-04-18 15:13:56.469241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.469277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.469289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.960 [2024-04-18 15:13:56.469311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.469325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.469338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.469347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.469356] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.960 [2024-04-18 15:13:56.469368] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.960 [2024-04-18 15:13:56.474902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.960 [2024-04-18 15:13:56.474981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.475019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.475031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.960 [2024-04-18 15:13:56.475042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.475056] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.475083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.475105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.475114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.960 [2024-04-18 15:13:56.475127] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.960 [2024-04-18 15:13:56.479180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.960 [2024-04-18 15:13:56.479257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.479293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.479305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.960 [2024-04-18 15:13:56.479315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.479329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.479342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.479351] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.479361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.960 [2024-04-18 15:13:56.479373] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.960 [2024-04-18 15:13:56.484941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.960 [2024-04-18 15:13:56.485050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.485090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.485102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.960 [2024-04-18 15:13:56.485112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.960 [2024-04-18 15:13:56.485127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.960 [2024-04-18 15:13:56.485157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.960 [2024-04-18 15:13:56.485168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.960 [2024-04-18 15:13:56.485178] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.960 [2024-04-18 15:13:56.485190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.960 [2024-04-18 15:13:56.489212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:40.960 [2024-04-18 15:13:56.489294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.489334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.960 [2024-04-18 15:13:56.489347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b62100 with addr=10.0.0.3, port=4420 00:26:40.961 [2024-04-18 15:13:56.489356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b62100 is same with the state(5) to be set 00:26:40.961 [2024-04-18 15:13:56.489371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b62100 (9): Bad file descriptor 00:26:40.961 [2024-04-18 15:13:56.489384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:40.961 [2024-04-18 15:13:56.489393] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:40.961 [2024-04-18 15:13:56.489402] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:40.961 [2024-04-18 15:13:56.489415] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:40.961 [2024-04-18 15:13:56.494987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:40.961 [2024-04-18 15:13:56.495095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.961 [2024-04-18 15:13:56.495133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.961 [2024-04-18 15:13:56.495145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b75220 with addr=10.0.0.2, port=4420 00:26:40.961 [2024-04-18 15:13:56.495156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b75220 is same with the state(5) to be set 00:26:40.961 [2024-04-18 15:13:56.495172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b75220 (9): Bad file descriptor 00:26:40.961 [2024-04-18 15:13:56.495203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:40.961 [2024-04-18 15:13:56.495213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:40.961 [2024-04-18 15:13:56.495223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:40.961 [2024-04-18 15:13:56.495236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:40.961 [2024-04-18 15:13:56.497210] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:40.961 [2024-04-18 15:13:56.497239] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:40.961 [2024-04-18 15:13:56.497282] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:40.961 [2024-04-18 15:13:56.498222] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:26:40.961 [2024-04-18 15:13:56.498251] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:40.961 [2024-04-18 15:13:56.498269] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:40.961 [2024-04-18 15:13:56.583165] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:40.961 [2024-04-18 15:13:56.584136] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@68 -- # sort 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:41.897 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.897 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@68 -- # xargs 00:26:41.897 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@64 -- # sort 00:26:41.897 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@64 -- # xargs 00:26:41.897 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:41.897 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.897 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # xargs 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.897 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:26:41.897 15:13:57 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:41.897 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.897 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:41.897 15:13:57 -- host/mdns_discovery.sh@72 -- # xargs 00:26:41.897 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:42.156 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.156 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:26:42.156 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:26:42.156 15:13:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.156 15:13:57 -- common/autotest_common.sh@10 -- # set +x 00:26:42.156 15:13:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.156 15:13:57 -- host/mdns_discovery.sh@172 -- # sleep 1 00:26:42.156 [2024-04-18 15:13:57.712566] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:43.107 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@80 -- # xargs 00:26:43.107 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@80 -- # sort 00:26:43.107 15:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@68 -- # sort 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.107 15:13:58 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:43.107 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.107 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@68 -- # xargs 00:26:43.108 15:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@64 -- # sort 00:26:43.108 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@64 -- # xargs 00:26:43.108 15:13:58 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:43.108 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.108 15:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:43.367 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.367 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.367 15:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:43.367 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.367 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.367 15:13:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:43.367 15:13:58 -- common/autotest_common.sh@638 -- # local es=0 00:26:43.367 15:13:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:43.367 15:13:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:43.367 15:13:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:43.367 15:13:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:43.367 15:13:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:43.367 15:13:58 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:43.367 15:13:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.367 15:13:58 -- common/autotest_common.sh@10 -- # set +x 00:26:43.367 [2024-04-18 15:13:58.910647] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:26:43.367 2024/04/18 15:13:58 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:26:43.367 request: 00:26:43.367 { 00:26:43.367 "method": "bdev_nvme_start_mdns_discovery", 00:26:43.367 "params": { 00:26:43.367 "name": "mdns", 00:26:43.367 "svcname": "_nvme-disc._http", 00:26:43.367 "hostnqn": "nqn.2021-12.io.spdk:test" 00:26:43.367 } 00:26:43.367 } 00:26:43.367 Got JSON-RPC error response 00:26:43.367 GoRPCClient: error on JSON-RPC call 00:26:43.367 15:13:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:43.367 15:13:58 -- 
common/autotest_common.sh@641 -- # es=1 00:26:43.367 15:13:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:43.367 15:13:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:43.367 15:13:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:43.367 15:13:58 -- host/mdns_discovery.sh@183 -- # sleep 5 00:26:43.628 [2024-04-18 15:13:59.294776] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:43.887 [2024-04-18 15:13:59.394594] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:43.887 [2024-04-18 15:13:59.494439] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:43.887 [2024-04-18 15:13:59.494476] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:43.887 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:43.887 cookie is 0 00:26:43.887 is_local: 1 00:26:43.887 our_own: 0 00:26:43.887 wide_area: 0 00:26:43.887 multicast: 1 00:26:43.887 cached: 1 00:26:44.146 [2024-04-18 15:13:59.594301] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:44.146 [2024-04-18 15:13:59.594353] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:44.146 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:44.146 cookie is 0 00:26:44.146 is_local: 1 00:26:44.146 our_own: 0 00:26:44.146 wide_area: 0 00:26:44.146 multicast: 1 00:26:44.146 cached: 1 00:26:45.083 [2024-04-18 15:14:00.497852] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:45.083 [2024-04-18 15:14:00.497908] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:45.083 [2024-04-18 15:14:00.497929] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:45.083 [2024-04-18 15:14:00.583831] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:26:45.083 [2024-04-18 15:14:00.597497] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:45.083 [2024-04-18 15:14:00.597547] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:45.083 [2024-04-18 15:14:00.597565] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.083 [2024-04-18 15:14:00.645831] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:45.083 [2024-04-18 15:14:00.645887] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:45.083 [2024-04-18 15:14:00.684038] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:26:45.083 [2024-04-18 15:14:00.743499] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:45.083 [2024-04-18 15:14:00.743582] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:48.445 15:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.445 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@80 -- # sort 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@80 -- # xargs 00:26:48.445 15:14:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@76 -- # sort 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:26:48.445 15:14:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.445 15:14:03 -- common/autotest_common.sh@10 -- # set +x 00:26:48.445 15:14:03 -- host/mdns_discovery.sh@76 -- # xargs 00:26:48.445 15:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.445 15:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@64 -- # xargs 00:26:48.445 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@64 -- # sort 00:26:48.445 15:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:48.445 15:14:04 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:48.445 15:14:04 -- common/autotest_common.sh@638 -- # local es=0 00:26:48.445 15:14:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:48.445 15:14:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:48.445 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:48.445 15:14:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:48.445 15:14:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:48.445 15:14:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:48.445 15:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.445 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:48.445 [2024-04-18 15:14:04.119657] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:26:48.445 2024/04/18 15:14:04 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:26:48.445 request: 00:26:48.445 { 00:26:48.445 "method": "bdev_nvme_start_mdns_discovery", 00:26:48.445 "params": { 00:26:48.445 "name": "cdc", 00:26:48.445 "svcname": "_nvme-disc._tcp", 00:26:48.445 "hostnqn": "nqn.2021-12.io.spdk:test" 00:26:48.445 } 00:26:48.445 } 00:26:48.445 Got JSON-RPC error response 00:26:48.445 GoRPCClient: error on JSON-RPC call 00:26:48.445 15:14:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:48.445 15:14:04 -- common/autotest_common.sh@641 -- # es=1 00:26:48.445 15:14:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:48.445 15:14:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:48.445 15:14:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:48.446 15:14:04 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:26:48.446 15:14:04 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:48.446 15:14:04 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:26:48.446 15:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.446 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:48.446 15:14:04 -- host/mdns_discovery.sh@76 -- # xargs 00:26:48.446 15:14:04 -- host/mdns_discovery.sh@76 -- # sort 00:26:48.704 15:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.704 15:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.704 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@64 -- # sort 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@64 -- # xargs 00:26:48.704 15:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:26:48.704 15:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.704 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:48.704 15:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@197 -- # kill 86288 00:26:48.704 15:14:04 -- host/mdns_discovery.sh@200 -- # wait 86288 00:26:48.704 [2024-04-18 15:14:04.296077] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:26:48.964 15:14:04 -- host/mdns_discovery.sh@201 -- # kill 86364 00:26:48.964 Got SIGTERM, quitting. 00:26:48.964 15:14:04 -- host/mdns_discovery.sh@202 -- # kill 86313 00:26:48.964 15:14:04 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:26:48.964 15:14:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:48.964 15:14:04 -- nvmf/common.sh@117 -- # sync 00:26:48.964 Got SIGTERM, quitting. 
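The negative checks traced above all reduce to one pattern: issue an RPC against the host socket, flatten the JSON reply to a sorted, space-joined list of names with jq/sort/xargs, and compare it to the expected string; a second mDNS discovery registration that collides with the running one is additionally expected to fail with JSON-RPC error -17 (File exists). A condensed sketch of those steps, reusing the rpc.py invocations visible in the trace (the helper names and the failure check approximate the test script rather than copying it verbatim):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

get_bdev_list() {
    # Flatten bdev_get_bdevs to "mdns0_nvme0n1 mdns0_nvme0n2 ..." for a single string compare.
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_discovery_ctrlrs() {
    $rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
}

[[ $(get_discovery_ctrlrs) == "mdns0_nvme mdns1_nvme" ]]
[[ $(get_bdev_list) == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]]

# A colliding registration must fail with Code=-17 (File exists), as in the trace above.
if $rpc bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
    echo "duplicate mDNS discovery unexpectedly succeeded" >&2
    exit 1
fi

# Stopping discovery detaches the mdns* controllers and stops the avahi poller.
$rpc bdev_nvme_stop_mdns_discovery -b mdns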
00:26:48.964 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:26:48.964 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:26:48.964 avahi-daemon 0.8 exiting. 00:26:48.964 15:14:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.964 15:14:04 -- nvmf/common.sh@120 -- # set +e 00:26:48.964 15:14:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.964 15:14:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.964 rmmod nvme_tcp 00:26:48.964 rmmod nvme_fabrics 00:26:48.964 rmmod nvme_keyring 00:26:48.964 15:14:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.964 15:14:04 -- nvmf/common.sh@124 -- # set -e 00:26:48.964 15:14:04 -- nvmf/common.sh@125 -- # return 0 00:26:48.964 15:14:04 -- nvmf/common.sh@478 -- # '[' -n 86238 ']' 00:26:48.964 15:14:04 -- nvmf/common.sh@479 -- # killprocess 86238 00:26:48.964 15:14:04 -- common/autotest_common.sh@936 -- # '[' -z 86238 ']' 00:26:48.964 15:14:04 -- common/autotest_common.sh@940 -- # kill -0 86238 00:26:48.964 15:14:04 -- common/autotest_common.sh@941 -- # uname 00:26:48.964 15:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:48.964 15:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86238 00:26:48.964 15:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:48.964 15:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:48.964 killing process with pid 86238 00:26:48.964 15:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86238' 00:26:48.964 15:14:04 -- common/autotest_common.sh@955 -- # kill 86238 00:26:48.964 15:14:04 -- common/autotest_common.sh@960 -- # wait 86238 00:26:49.223 15:14:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:49.223 15:14:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:49.223 15:14:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:49.223 15:14:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.223 15:14:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.223 15:14:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.223 15:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.223 15:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.223 15:14:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:49.223 00:26:49.223 real 0m20.799s 00:26:49.223 user 0m39.651s 00:26:49.223 sys 0m3.058s 00:26:49.223 15:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:49.223 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:49.223 ************************************ 00:26:49.223 END TEST nvmf_mdns_discovery 00:26:49.223 ************************************ 00:26:49.481 15:14:04 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:26:49.481 15:14:04 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:49.481 15:14:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:49.481 15:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:49.481 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:26:49.481 ************************************ 00:26:49.481 START TEST nvmf_multipath 00:26:49.481 ************************************ 00:26:49.481 15:14:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:26:49.740 * Looking for 
test storage... 00:26:49.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:49.740 15:14:05 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.740 15:14:05 -- nvmf/common.sh@7 -- # uname -s 00:26:49.740 15:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.740 15:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.740 15:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.740 15:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.740 15:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.740 15:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.740 15:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.740 15:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.740 15:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.740 15:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:26:49.740 15:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:26:49.740 15:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.740 15:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.740 15:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.740 15:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.740 15:14:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.740 15:14:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.740 15:14:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.740 15:14:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.740 15:14:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.740 15:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.740 15:14:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.740 15:14:05 -- paths/export.sh@5 -- # export PATH 00:26:49.740 15:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.740 15:14:05 -- nvmf/common.sh@47 -- # : 0 00:26:49.740 15:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.740 15:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.740 15:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.740 15:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.740 15:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.740 15:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.740 15:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.740 15:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.740 15:14:05 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.740 15:14:05 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.740 15:14:05 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.740 15:14:05 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:49.740 15:14:05 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:49.740 15:14:05 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:49.740 15:14:05 -- host/multipath.sh@30 -- # nvmftestinit 00:26:49.740 15:14:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:49.740 15:14:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.740 15:14:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:49.740 15:14:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:49.740 15:14:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:49.740 15:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.740 15:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.740 15:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.740 15:14:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:49.740 15:14:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:49.740 15:14:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.740 15:14:05 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.740 15:14:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:49.740 15:14:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:49.740 15:14:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:49.740 15:14:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:49.740 15:14:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:49.740 15:14:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.740 15:14:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:49.740 15:14:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:49.740 15:14:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:49.740 15:14:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:49.740 15:14:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:49.741 15:14:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:49.741 Cannot find device "nvmf_tgt_br" 00:26:49.741 15:14:05 -- nvmf/common.sh@155 -- # true 00:26:49.741 15:14:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.741 Cannot find device "nvmf_tgt_br2" 00:26:49.741 15:14:05 -- nvmf/common.sh@156 -- # true 00:26:49.741 15:14:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:49.741 15:14:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:49.741 Cannot find device "nvmf_tgt_br" 00:26:49.741 15:14:05 -- nvmf/common.sh@158 -- # true 00:26:49.741 15:14:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:49.741 Cannot find device "nvmf_tgt_br2" 00:26:49.741 15:14:05 -- nvmf/common.sh@159 -- # true 00:26:49.741 15:14:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:49.741 15:14:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:49.741 15:14:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:50.000 15:14:05 -- nvmf/common.sh@162 -- # true 00:26:50.000 15:14:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:50.000 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:50.000 15:14:05 -- nvmf/common.sh@163 -- # true 00:26:50.000 15:14:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:50.000 15:14:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:50.000 15:14:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:50.000 15:14:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:50.000 15:14:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:50.000 15:14:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:50.000 15:14:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:50.000 15:14:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:50.000 15:14:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:50.000 15:14:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:50.000 15:14:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:50.000 15:14:05 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:26:50.000 15:14:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:50.000 15:14:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:50.000 15:14:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:50.000 15:14:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:50.000 15:14:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:50.000 15:14:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:50.000 15:14:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:50.000 15:14:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:50.000 15:14:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:50.000 15:14:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:50.000 15:14:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.000 15:14:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:50.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:26:50.000 00:26:50.000 --- 10.0.0.2 ping statistics --- 00:26:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.000 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:50.000 15:14:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:50.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:50.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:26:50.000 00:26:50.000 --- 10.0.0.3 ping statistics --- 00:26:50.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.000 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:50.000 15:14:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:50.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:50.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:50.258 00:26:50.258 --- 10.0.0.1 ping statistics --- 00:26:50.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.258 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:50.258 15:14:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.258 15:14:05 -- nvmf/common.sh@422 -- # return 0 00:26:50.258 15:14:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:50.258 15:14:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.258 15:14:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:50.258 15:14:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:50.258 15:14:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.258 15:14:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:50.258 15:14:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:50.258 15:14:05 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:26:50.258 15:14:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:50.258 15:14:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:50.258 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:50.258 15:14:05 -- nvmf/common.sh@470 -- # nvmfpid=86887 00:26:50.258 15:14:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:50.258 15:14:05 -- nvmf/common.sh@471 -- # waitforlisten 86887 00:26:50.258 15:14:05 -- common/autotest_common.sh@817 -- # '[' -z 86887 ']' 00:26:50.258 15:14:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.258 15:14:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.258 15:14:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.258 15:14:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.258 15:14:05 -- common/autotest_common.sh@10 -- # set +x 00:26:50.258 [2024-04-18 15:14:05.805334] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:26:50.258 [2024-04-18 15:14:05.805423] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.258 [2024-04-18 15:14:05.947652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.559 [2024-04-18 15:14:06.047862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.559 [2024-04-18 15:14:06.048183] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.559 [2024-04-18 15:14:06.048254] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.559 [2024-04-18 15:14:06.048307] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.559 [2024-04-18 15:14:06.048358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
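nvmf_veth_init, traced just above, builds the virtual topology the multipath run depends on: a target network namespace holding two veth interfaces (10.0.0.2 and 10.0.0.3), an initiator veth on the host (10.0.0.1), and a bridge joining the host-side peer ends, with iptables opened for NVMe/TCP on port 4420. A minimal standalone sketch of the same wiring, using the ip/iptables commands shown in the trace (error handling and the pre-cleanup of stale devices are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator address on the host, both target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge joins the host-side veth ends; NVMe/TCP traffic is admitted on 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, matching the pings in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1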
00:26:50.559 [2024-04-18 15:14:06.048727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.559 [2024-04-18 15:14:06.048726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.127 15:14:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.127 15:14:06 -- common/autotest_common.sh@850 -- # return 0 00:26:51.127 15:14:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:51.127 15:14:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:51.127 15:14:06 -- common/autotest_common.sh@10 -- # set +x 00:26:51.127 15:14:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.127 15:14:06 -- host/multipath.sh@33 -- # nvmfapp_pid=86887 00:26:51.127 15:14:06 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:51.386 [2024-04-18 15:14:06.964162] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.386 15:14:06 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:51.645 Malloc0 00:26:51.645 15:14:07 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:51.903 15:14:07 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.163 15:14:07 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.163 [2024-04-18 15:14:07.811230] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.163 15:14:07 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:52.421 [2024-04-18 15:14:08.018994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:52.421 15:14:08 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:52.421 15:14:08 -- host/multipath.sh@44 -- # bdevperf_pid=86985 00:26:52.421 15:14:08 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:52.421 15:14:08 -- host/multipath.sh@47 -- # waitforlisten 86985 /var/tmp/bdevperf.sock 00:26:52.421 15:14:08 -- common/autotest_common.sh@817 -- # '[' -z 86985 ']' 00:26:52.421 15:14:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.421 15:14:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:52.421 15:14:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
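The target-side RPCs just traced give the initiator two paths to one namespace: a 64 MiB, 512-byte-block Malloc0 bdev is exported through subsystem nqn.2016-06.io.spdk:cnode1, created with ANA reporting enabled (-r), and the subsystem listens on the same address on ports 4420 and 4421 so that each listener's ANA state can be flipped independently later in the test. A condensed sketch of that configuration, lifted from the invocations above (rpc_py points at the same script used throughout this run):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport, with the same options used in the trace.
$rpc_py nvmf_create_transport -t tcp -o -u 8192

# Backing namespace: one malloc bdev, 64 MiB, 512-byte blocks.
$rpc_py bdev_malloc_create 64 512 -b Malloc0

# Subsystem with ANA reporting so per-listener states can be changed at runtime.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Two listeners on the same address are the two paths the host will multipath across.
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

On the host side the trace then starts bdevperf, attaches the same controller once per port (the 4421 attach adds -x multipath), and drives the later set_ANA_state / confirm_io_on_port cycles against those two paths.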
00:26:52.421 15:14:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.421 15:14:08 -- common/autotest_common.sh@10 -- # set +x 00:26:53.357 15:14:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:53.357 15:14:09 -- common/autotest_common.sh@850 -- # return 0 00:26:53.357 15:14:09 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:53.616 15:14:09 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:53.935 Nvme0n1 00:26:53.935 15:14:09 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:54.504 Nvme0n1 00:26:54.504 15:14:09 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:54.504 15:14:09 -- host/multipath.sh@78 -- # sleep 1 00:26:55.441 15:14:10 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:26:55.441 15:14:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:55.701 15:14:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:55.960 15:14:11 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:26:55.960 15:14:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:55.960 15:14:11 -- host/multipath.sh@65 -- # dtrace_pid=87072 00:26:55.960 15:14:11 -- host/multipath.sh@66 -- # sleep 6 00:27:02.530 15:14:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:02.530 15:14:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:02.530 15:14:17 -- host/multipath.sh@67 -- # active_port=4421 00:27:02.530 15:14:17 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:02.530 Attaching 4 probes... 
00:27:02.530 @path[10.0.0.2, 4421]: 20564 00:27:02.530 @path[10.0.0.2, 4421]: 20699 00:27:02.530 @path[10.0.0.2, 4421]: 20947 00:27:02.530 @path[10.0.0.2, 4421]: 20280 00:27:02.530 @path[10.0.0.2, 4421]: 21253 00:27:02.530 15:14:17 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:02.530 15:14:17 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:02.530 15:14:17 -- host/multipath.sh@69 -- # sed -n 1p 00:27:02.530 15:14:17 -- host/multipath.sh@69 -- # port=4421 00:27:02.530 15:14:17 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:02.530 15:14:17 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:02.530 15:14:17 -- host/multipath.sh@72 -- # kill 87072 00:27:02.530 15:14:17 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:02.530 15:14:17 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:02.530 15:14:17 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:02.530 15:14:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:02.530 15:14:18 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:02.530 15:14:18 -- host/multipath.sh@65 -- # dtrace_pid=87204 00:27:02.530 15:14:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:02.530 15:14:18 -- host/multipath.sh@66 -- # sleep 6 00:27:09.116 15:14:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:09.116 15:14:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:09.116 15:14:24 -- host/multipath.sh@67 -- # active_port=4420 00:27:09.116 15:14:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:09.116 Attaching 4 probes... 
00:27:09.116 @path[10.0.0.2, 4420]: 21300 00:27:09.116 @path[10.0.0.2, 4420]: 21808 00:27:09.116 @path[10.0.0.2, 4420]: 21949 00:27:09.116 @path[10.0.0.2, 4420]: 22107 00:27:09.116 @path[10.0.0.2, 4420]: 21903 00:27:09.116 15:14:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:09.116 15:14:24 -- host/multipath.sh@69 -- # sed -n 1p 00:27:09.116 15:14:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:09.116 15:14:24 -- host/multipath.sh@69 -- # port=4420 00:27:09.116 15:14:24 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:09.116 15:14:24 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:09.116 15:14:24 -- host/multipath.sh@72 -- # kill 87204 00:27:09.116 15:14:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:09.116 15:14:24 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:09.116 15:14:24 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:09.116 15:14:24 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:09.375 15:14:24 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:09.375 15:14:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:09.375 15:14:24 -- host/multipath.sh@65 -- # dtrace_pid=87340 00:27:09.375 15:14:24 -- host/multipath.sh@66 -- # sleep 6 00:27:15.949 15:14:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:15.949 15:14:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:15.949 15:14:31 -- host/multipath.sh@67 -- # active_port=4421 00:27:15.949 15:14:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:15.949 Attaching 4 probes... 
00:27:15.949 @path[10.0.0.2, 4421]: 18293 00:27:15.949 @path[10.0.0.2, 4421]: 21774 00:27:15.949 @path[10.0.0.2, 4421]: 21564 00:27:15.949 @path[10.0.0.2, 4421]: 20057 00:27:15.949 @path[10.0.0.2, 4421]: 20117 00:27:15.949 15:14:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:15.949 15:14:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:15.949 15:14:31 -- host/multipath.sh@69 -- # sed -n 1p 00:27:15.949 15:14:31 -- host/multipath.sh@69 -- # port=4421 00:27:15.949 15:14:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:15.949 15:14:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:15.949 15:14:31 -- host/multipath.sh@72 -- # kill 87340 00:27:15.949 15:14:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:15.949 15:14:31 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:15.949 15:14:31 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:15.949 15:14:31 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:15.949 15:14:31 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:15.949 15:14:31 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:15.949 15:14:31 -- host/multipath.sh@65 -- # dtrace_pid=87470 00:27:15.950 15:14:31 -- host/multipath.sh@66 -- # sleep 6 00:27:22.558 15:14:37 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:22.558 15:14:37 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:22.558 15:14:37 -- host/multipath.sh@67 -- # active_port= 00:27:22.558 15:14:37 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:22.558 Attaching 4 probes... 
00:27:22.558 00:27:22.558 00:27:22.558 00:27:22.558 00:27:22.558 00:27:22.558 15:14:37 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:22.558 15:14:37 -- host/multipath.sh@69 -- # sed -n 1p 00:27:22.558 15:14:37 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:22.558 15:14:37 -- host/multipath.sh@69 -- # port= 00:27:22.558 15:14:37 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:22.558 15:14:37 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:22.558 15:14:37 -- host/multipath.sh@72 -- # kill 87470 00:27:22.558 15:14:37 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:22.558 15:14:37 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:22.558 15:14:37 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:22.558 15:14:38 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:22.821 15:14:38 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:22.821 15:14:38 -- host/multipath.sh@65 -- # dtrace_pid=87605 00:27:22.821 15:14:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:22.821 15:14:38 -- host/multipath.sh@66 -- # sleep 6 00:27:29.449 15:14:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:29.450 15:14:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:29.450 15:14:44 -- host/multipath.sh@67 -- # active_port=4421 00:27:29.450 15:14:44 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:29.450 Attaching 4 probes... 
00:27:29.450 @path[10.0.0.2, 4421]: 18153 00:27:29.450 @path[10.0.0.2, 4421]: 19701 00:27:29.450 @path[10.0.0.2, 4421]: 19671 00:27:29.450 @path[10.0.0.2, 4421]: 19826 00:27:29.450 @path[10.0.0.2, 4421]: 20111 00:27:29.450 15:14:44 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:29.450 15:14:44 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:29.450 15:14:44 -- host/multipath.sh@69 -- # sed -n 1p 00:27:29.450 15:14:44 -- host/multipath.sh@69 -- # port=4421 00:27:29.450 15:14:44 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.450 15:14:44 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.450 15:14:44 -- host/multipath.sh@72 -- # kill 87605 00:27:29.450 15:14:44 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:29.450 15:14:44 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:29.450 [2024-04-18 15:14:44.833328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.833625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.833792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.833844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.833957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834011] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834429] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834707] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.450 [2024-04-18 15:14:44.834756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.451 (the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state error repeats with successive timestamps, 15:14:44.834903 through 15:14:44.836466, while the 10.0.0.2:4421 listener is removed)
with the state(5) to be set 00:27:29.451 [2024-04-18 15:14:44.836474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.451 [2024-04-18 15:14:44.836482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2244ae0 is same with the state(5) to be set 00:27:29.451 15:14:44 -- host/multipath.sh@101 -- # sleep 1 00:27:30.387 15:14:45 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:30.387 15:14:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:30.387 15:14:45 -- host/multipath.sh@65 -- # dtrace_pid=87736 00:27:30.387 15:14:45 -- host/multipath.sh@66 -- # sleep 6 00:27:36.958 15:14:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:36.958 15:14:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:36.958 15:14:52 -- host/multipath.sh@67 -- # active_port=4420 00:27:36.958 15:14:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:36.958 Attaching 4 probes... 00:27:36.958 @path[10.0.0.2, 4420]: 19066 00:27:36.958 @path[10.0.0.2, 4420]: 18692 00:27:36.958 @path[10.0.0.2, 4420]: 20603 00:27:36.958 @path[10.0.0.2, 4420]: 21242 00:27:36.958 @path[10.0.0.2, 4420]: 20301 00:27:36.958 15:14:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:36.958 15:14:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:36.958 15:14:52 -- host/multipath.sh@69 -- # sed -n 1p 00:27:36.958 15:14:52 -- host/multipath.sh@69 -- # port=4420 00:27:36.958 15:14:52 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:36.958 15:14:52 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:36.958 15:14:52 -- host/multipath.sh@72 -- # kill 87736 00:27:36.958 15:14:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:36.958 15:14:52 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:36.958 [2024-04-18 15:14:52.375005] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:36.958 15:14:52 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:36.958 15:14:52 -- host/multipath.sh@111 -- # sleep 6 00:27:43.520 15:14:58 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:27:43.520 15:14:58 -- host/multipath.sh@65 -- # dtrace_pid=87928 00:27:43.520 15:14:58 -- host/multipath.sh@66 -- # sleep 6 00:27:43.520 15:14:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86887 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:50.129 15:15:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:50.129 15:15:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:50.129 15:15:04 -- host/multipath.sh@67 -- # active_port=4421 00:27:50.129 15:15:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.129 Attaching 4 probes... 
00:27:50.129 @path[10.0.0.2, 4421]: 20051 00:27:50.129 @path[10.0.0.2, 4421]: 19840 00:27:50.129 @path[10.0.0.2, 4421]: 19422 00:27:50.129 @path[10.0.0.2, 4421]: 19656 00:27:50.129 @path[10.0.0.2, 4421]: 19756 00:27:50.129 15:15:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:50.129 15:15:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:50.129 15:15:04 -- host/multipath.sh@69 -- # sed -n 1p 00:27:50.129 15:15:04 -- host/multipath.sh@69 -- # port=4421 00:27:50.129 15:15:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:50.129 15:15:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:50.129 15:15:04 -- host/multipath.sh@72 -- # kill 87928 00:27:50.129 15:15:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.129 15:15:04 -- host/multipath.sh@114 -- # killprocess 86985 00:27:50.129 15:15:04 -- common/autotest_common.sh@936 -- # '[' -z 86985 ']' 00:27:50.129 15:15:04 -- common/autotest_common.sh@940 -- # kill -0 86985 00:27:50.129 15:15:04 -- common/autotest_common.sh@941 -- # uname 00:27:50.129 15:15:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:50.129 15:15:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86985 00:27:50.129 15:15:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:50.129 killing process with pid 86985 00:27:50.129 15:15:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:50.129 15:15:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86985' 00:27:50.129 15:15:04 -- common/autotest_common.sh@955 -- # kill 86985 00:27:50.129 15:15:04 -- common/autotest_common.sh@960 -- # wait 86985 00:27:50.129 Connection closed with partial response: 00:27:50.129 00:27:50.129 00:27:50.129 15:15:05 -- host/multipath.sh@116 -- # wait 86985 00:27:50.129 15:15:05 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:50.129 [2024-04-18 15:14:08.083088] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:27:50.129 [2024-04-18 15:14:08.083181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86985 ] 00:27:50.129 [2024-04-18 15:14:08.211568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.129 [2024-04-18 15:14:08.325099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.129 Running I/O for 90 seconds... 
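The confirm_io_on_port checks logged above cross-check two sources: the listener whose ANA state matches the expectation (rpc.py nvmf_subsystem_get_listeners piped through jq) and the port that the nvmf_path.bt bpftrace probes actually observed in trace.txt (the awk/cut/sed pipeline). A minimal sketch of that check, with illustrative variable names and shortened paths rather than the actual test-harness code, might look like:

#!/usr/bin/env bash
# Sketch only: reproduce the port confirmation seen in the log above.
ana_state=$1          # e.g. "optimized"
expected_port=$2      # e.g. 4421
trace=trace.txt       # produced by scripts/bpftrace.sh <bdevperf pid> nvmf_path.bt

# Port of the listener currently reporting the requested ANA state.
active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

# First port that shows up in the bpftrace I/O counters (@path[10.0.0.2, <port>]: <count>).
io_port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

if [[ $active_port == "$expected_port" && $io_port == "$expected_port" ]]; then
  echo "I/O confirmed on port $expected_port"
else
  echo "I/O not on expected port (ana=$active_port, io=$io_port)" >&2
  exit 1
fi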
00:27:50.129 [2024-04-18 15:14:18.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.129 [2024-04-18 15:14:18.132682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:50.129 [2024-04-18 15:14:18.132762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.129 [2024-04-18 15:14:18.132786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:50.129 [2024-04-18 15:14:18.133199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.129 [2024-04-18 15:14:18.133235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:50.129 [2024-04-18 15:14:18.133271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.129 [2024-04-18 15:14:18.133292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:50.129 [2024-04-18 15:14:18.133321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.133953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.133974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.130 [2024-04-18 15:14:18.134620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.134959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.134981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.135032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.135092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.130 [2024-04-18 15:14:18.135139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.130 [2024-04-18 15:14:18.135187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.130 [2024-04-18 15:14:18.135235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:50.130 [2024-04-18 15:14:18.135280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.130 [2024-04-18 15:14:18.135302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.135967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.135990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:27:50.131 [2024-04-18 15:14:18.136171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.136879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.131 [2024-04-18 15:14:18.136901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.137848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.137886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.137932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.137955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.137983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.138032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.138080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.138129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.138175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.131 [2024-04-18 15:14:18.138223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.131 [2024-04-18 15:14:18.138246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.138612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.132 [2024-04-18 15:14:18.138660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.138969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.132 [2024-04-18 15:14:18.138989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.139957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.139983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.140003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:50.132 [2024-04-18 15:14:18.140031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.140052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:27:50.132 [2024-04-18 15:14:18.140078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.132 [2024-04-18 15:14:18.140098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:18.140513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:18.140533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.618965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.133 [2024-04-18 15:14:24.619047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.133 [2024-04-18 15:14:24.619870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.619973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.619992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.620026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.620060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.620094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.620127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:50.133 [2024-04-18 15:14:24.620160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.133 [2024-04-18 15:14:24.620173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.134 [2024-04-18 15:14:24.620574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:27:50.134 [2024-04-18 15:14:24.620923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.620971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.620991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.621200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.621214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.623113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.623152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:50.134 [2024-04-18 15:14:24.623183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.134 [2024-04-18 15:14:24.623199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:24.623225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.135 [2024-04-18 15:14:24.623239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:24.623266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.135 [2024-04-18 15:14:24.623280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:24.623306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.135 [2024-04-18 15:14:24.623320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:24.623346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.135 [2024-04-18 15:14:24.623359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:24.623386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.135 [2024-04-18 15:14:24.623400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.591859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.591936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.591999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:50.135 [2024-04-18 15:14:31.592450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.592980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.592999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.593013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.593033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.593046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.593066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.593080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:50.135 [2024-04-18 15:14:31.594987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.135 [2024-04-18 15:14:31.595020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:27:50.136 [2024-04-18 15:14:31.595480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:31.595972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.595999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:31.596404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:31.596418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.136 [2024-04-18 15:14:44.833614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:50.136 [2024-04-18 15:14:44.833853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.136 [2024-04-18 15:14:44.833867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.833886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.833900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.833920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.833943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.833964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.833979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 
[2024-04-18 15:14:44.837616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.837975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.837990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.137 [2024-04-18 15:14:44.838258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.137 [2024-04-18 15:14:44.838273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-04-18 15:14:44.838667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.838981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.838995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.839009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.839023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.839041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.839056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.138 [2024-04-18 15:14:44.839069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.839097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-04-18 15:14:44.839110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.839124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.138 [2024-04-18 15:14:44.839137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.138 [2024-04-18 15:14:44.839151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 
15:14:44.839481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.839985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.839998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.139 [2024-04-18 15:14:44.840235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.139 [2024-04-18 15:14:44.840247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128544 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.140 [2024-04-18 15:14:44.840609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.840624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16be7c0 is same with the state(5) to be set 
00:27:50.140 [2024-04-18 15:14:44.840684] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16be7c0 was disconnected and freed. reset controller. 00:27:50.140 [2024-04-18 15:14:44.841815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.140 [2024-04-18 15:14:44.841872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.140 [2024-04-18 15:14:44.841889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.140 [2024-04-18 15:14:44.841921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185c900 (9): Bad file descriptor 00:27:50.140 [2024-04-18 15:14:44.842048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.140 [2024-04-18 15:14:44.842088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.140 [2024-04-18 15:14:44.842104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185c900 with addr=10.0.0.2, port=4421 00:27:50.140 [2024-04-18 15:14:44.842119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185c900 is same with the state(5) to be set 00:27:50.140 [2024-04-18 15:14:44.842140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185c900 (9): Bad file descriptor 00:27:50.140 [2024-04-18 15:14:44.842160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:50.140 [2024-04-18 15:14:44.842189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:50.140 [2024-04-18 15:14:44.842211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:50.140 [2024-04-18 15:14:44.842237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:50.140 [2024-04-18 15:14:44.842249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:50.140 [2024-04-18 15:14:54.881437] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
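The wall of ABORTED - SQ DELETION notices above is the expected side effect of a path switch: when the active TCP qpair is torn down its submission queue is deleted, every in-flight READ/WRITE completes with that status, bdev_nvme frees the qpair and resets the controller, the first reconnect attempts to 10.0.0.2 port 4421 fail with errno 111 (connection refused) while the alternate listener is not reachable yet, and roughly ten seconds later the reset succeeds. The switch is driven by the test script toggling which port the subsystem listens on; a rough sketch of that toggle, assuming the same rpc.py helpers the timeout test below uses and the 4420/4421 port pair seen in this run (not a verbatim copy of multipath.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # drop the active path; this is what aborts the queued I/O above
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # publish the alternate path; the host's reconnect logic picks it up after a few retries
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421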
00:27:50.140 Received shutdown signal, test time was about 54.923636 seconds 00:27:50.140 00:27:50.140 Latency(us) 00:27:50.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.140 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:50.140 Verification LBA range: start 0x0 length 0x4000 00:27:50.140 Nvme0n1 : 54.92 8730.40 34.10 0.00 0.00 14636.25 381.64 7061253.96 00:27:50.140 =================================================================================================================== 00:27:50.140 Total : 8730.40 34.10 0.00 0.00 14636.25 381.64 7061253.96 00:27:50.140 15:15:05 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.140 15:15:05 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:27:50.140 15:15:05 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:50.140 15:15:05 -- host/multipath.sh@125 -- # nvmftestfini 00:27:50.140 15:15:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:50.140 15:15:05 -- nvmf/common.sh@117 -- # sync 00:27:50.140 15:15:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.140 15:15:05 -- nvmf/common.sh@120 -- # set +e 00:27:50.140 15:15:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.140 15:15:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.140 rmmod nvme_tcp 00:27:50.140 rmmod nvme_fabrics 00:27:50.140 rmmod nvme_keyring 00:27:50.140 15:15:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.140 15:15:05 -- nvmf/common.sh@124 -- # set -e 00:27:50.140 15:15:05 -- nvmf/common.sh@125 -- # return 0 00:27:50.140 15:15:05 -- nvmf/common.sh@478 -- # '[' -n 86887 ']' 00:27:50.140 15:15:05 -- nvmf/common.sh@479 -- # killprocess 86887 00:27:50.140 15:15:05 -- common/autotest_common.sh@936 -- # '[' -z 86887 ']' 00:27:50.140 15:15:05 -- common/autotest_common.sh@940 -- # kill -0 86887 00:27:50.140 15:15:05 -- common/autotest_common.sh@941 -- # uname 00:27:50.140 15:15:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:50.140 15:15:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86887 00:27:50.140 15:15:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:50.140 killing process with pid 86887 00:27:50.140 15:15:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:50.140 15:15:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86887' 00:27:50.140 15:15:05 -- common/autotest_common.sh@955 -- # kill 86887 00:27:50.140 15:15:05 -- common/autotest_common.sh@960 -- # wait 86887 00:27:50.140 15:15:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:50.140 15:15:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:50.140 15:15:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:50.140 15:15:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.140 15:15:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.140 15:15:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.140 15:15:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.140 15:15:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.399 15:15:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:50.399 00:27:50.399 real 1m0.793s 00:27:50.399 user 2m47.984s 00:27:50.399 sys 0m17.769s 00:27:50.399 15:15:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:50.399 
************************************ 00:27:50.399 END TEST nvmf_multipath 00:27:50.399 ************************************ 00:27:50.399 15:15:05 -- common/autotest_common.sh@10 -- # set +x 00:27:50.399 15:15:05 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:50.399 15:15:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:50.399 15:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:50.399 15:15:05 -- common/autotest_common.sh@10 -- # set +x 00:27:50.399 ************************************ 00:27:50.399 START TEST nvmf_timeout 00:27:50.399 ************************************ 00:27:50.399 15:15:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:27:50.658 * Looking for test storage... 00:27:50.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:50.658 15:15:06 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:50.658 15:15:06 -- nvmf/common.sh@7 -- # uname -s 00:27:50.658 15:15:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.658 15:15:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.658 15:15:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.658 15:15:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.658 15:15:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.658 15:15:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.658 15:15:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.658 15:15:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.658 15:15:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.658 15:15:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.658 15:15:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:27:50.658 15:15:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:27:50.658 15:15:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.658 15:15:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.658 15:15:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:50.658 15:15:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.658 15:15:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.658 15:15:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.658 15:15:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.658 15:15:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.658 15:15:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.658 15:15:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.658 15:15:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.658 15:15:06 -- paths/export.sh@5 -- # export PATH 00:27:50.658 15:15:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.658 15:15:06 -- nvmf/common.sh@47 -- # : 0 00:27:50.658 15:15:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.658 15:15:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.658 15:15:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.658 15:15:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.658 15:15:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.658 15:15:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.658 15:15:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.658 15:15:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.658 15:15:06 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.658 15:15:06 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.658 15:15:06 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:50.658 15:15:06 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:50.658 15:15:06 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:50.658 15:15:06 -- host/timeout.sh@19 -- # nvmftestinit 00:27:50.658 15:15:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:50.658 15:15:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.658 15:15:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:50.658 15:15:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:50.658 15:15:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:50.658 15:15:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.658 15:15:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.658 15:15:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.658 15:15:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:27:50.658 15:15:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:50.658 15:15:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:50.658 15:15:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:50.658 15:15:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:50.658 15:15:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:50.659 15:15:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.659 15:15:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.659 15:15:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:50.659 15:15:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:50.659 15:15:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:50.659 15:15:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:50.659 15:15:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:50.659 15:15:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.659 15:15:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:50.659 15:15:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:50.659 15:15:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:50.659 15:15:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:50.659 15:15:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:50.659 15:15:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:50.659 Cannot find device "nvmf_tgt_br" 00:27:50.659 15:15:06 -- nvmf/common.sh@155 -- # true 00:27:50.659 15:15:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:50.659 Cannot find device "nvmf_tgt_br2" 00:27:50.659 15:15:06 -- nvmf/common.sh@156 -- # true 00:27:50.659 15:15:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:50.659 15:15:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:50.659 Cannot find device "nvmf_tgt_br" 00:27:50.659 15:15:06 -- nvmf/common.sh@158 -- # true 00:27:50.659 15:15:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:50.659 Cannot find device "nvmf_tgt_br2" 00:27:50.659 15:15:06 -- nvmf/common.sh@159 -- # true 00:27:50.659 15:15:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:50.917 15:15:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:50.917 15:15:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:50.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.917 15:15:06 -- nvmf/common.sh@162 -- # true 00:27:50.917 15:15:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:50.917 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:50.917 15:15:06 -- nvmf/common.sh@163 -- # true 00:27:50.917 15:15:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:50.917 15:15:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:50.917 15:15:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:50.917 15:15:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:50.917 15:15:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:50.917 15:15:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:50.917 15:15:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:27:50.917 15:15:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:50.917 15:15:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:50.917 15:15:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:50.917 15:15:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:50.917 15:15:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:50.917 15:15:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:50.917 15:15:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:50.917 15:15:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:50.917 15:15:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:50.917 15:15:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:50.917 15:15:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:50.917 15:15:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:51.176 15:15:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:51.176 15:15:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:51.176 15:15:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:51.176 15:15:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:51.176 15:15:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:51.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:51.176 00:27:51.176 --- 10.0.0.2 ping statistics --- 00:27:51.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.176 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:51.176 15:15:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:51.176 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:51.176 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:27:51.176 00:27:51.176 --- 10.0.0.3 ping statistics --- 00:27:51.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.176 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:51.176 15:15:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:51.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:27:51.176 00:27:51.176 --- 10.0.0.1 ping statistics --- 00:27:51.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.176 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:51.176 15:15:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.176 15:15:06 -- nvmf/common.sh@422 -- # return 0 00:27:51.176 15:15:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:51.176 15:15:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.176 15:15:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:51.176 15:15:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:51.176 15:15:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.176 15:15:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:51.176 15:15:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:51.176 15:15:06 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:27:51.176 15:15:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:51.176 15:15:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:51.176 15:15:06 -- common/autotest_common.sh@10 -- # set +x 00:27:51.176 15:15:06 -- nvmf/common.sh@470 -- # nvmfpid=88260 00:27:51.176 15:15:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:51.176 15:15:06 -- nvmf/common.sh@471 -- # waitforlisten 88260 00:27:51.176 15:15:06 -- common/autotest_common.sh@817 -- # '[' -z 88260 ']' 00:27:51.176 15:15:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.176 15:15:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:51.176 15:15:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.176 15:15:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:51.176 15:15:06 -- common/autotest_common.sh@10 -- # set +x 00:27:51.176 [2024-04-18 15:15:06.802393] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:27:51.176 [2024-04-18 15:15:06.802485] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.435 [2024-04-18 15:15:06.949219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:51.435 [2024-04-18 15:15:07.049864] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.435 [2024-04-18 15:15:07.049927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.435 [2024-04-18 15:15:07.049938] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.435 [2024-04-18 15:15:07.049955] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.435 [2024-04-18 15:15:07.049963] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
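Everything between nvmf_veth_init and the three pings above is the harness building a disposable test network: stale devices and the old namespace are deleted first (the "Cannot find device" and "Cannot open network namespace" errors are expected and swallowed), then a fresh nvmf_tgt_ns_spdk namespace gets its veth pairs, all host-side peers are bridged together, and the pings confirm that 10.0.0.1 (initiator) can reach 10.0.0.2 and 10.0.0.3 (the two target interfaces) and vice versa. A condensed sketch of the same topology, using the device and namespace names from the trace:

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one initiator-side, two target-side paths
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target paths
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and tie the host-side peers together with a bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # let NVMe/TCP traffic in on the initiator interface and across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT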
00:27:51.435 [2024-04-18 15:15:07.050683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.435 [2024-04-18 15:15:07.050683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.002 15:15:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:52.002 15:15:07 -- common/autotest_common.sh@850 -- # return 0 00:27:52.002 15:15:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:52.002 15:15:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:52.002 15:15:07 -- common/autotest_common.sh@10 -- # set +x 00:27:52.261 15:15:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.261 15:15:07 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:52.261 15:15:07 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:52.261 [2024-04-18 15:15:07.922652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.261 15:15:07 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:52.520 Malloc0 00:27:52.520 15:15:08 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.778 15:15:08 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.036 15:15:08 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.293 [2024-04-18 15:15:08.809731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.293 15:15:08 -- host/timeout.sh@32 -- # bdevperf_pid=88347 00:27:53.293 15:15:08 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:53.293 15:15:08 -- host/timeout.sh@34 -- # waitforlisten 88347 /var/tmp/bdevperf.sock 00:27:53.293 15:15:08 -- common/autotest_common.sh@817 -- # '[' -z 88347 ']' 00:27:53.293 15:15:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.293 15:15:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:53.293 15:15:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.293 15:15:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:53.293 15:15:08 -- common/autotest_common.sh@10 -- # set +x 00:27:53.293 [2024-04-18 15:15:08.883066] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
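With the namespace network in place, timeout.sh configures the target over JSON-RPC: a TCP transport, a RAM-backed bdev (64 MB with 512-byte blocks, matching the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier), a subsystem, its namespace, and a listener on the first target address. The same bring-up condensed from the trace, with paths, names, and options exactly as logged:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as passed by the harness
  $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420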
00:27:53.293 [2024-04-18 15:15:08.883140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88347 ] 00:27:53.551 [2024-04-18 15:15:09.025253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.551 [2024-04-18 15:15:09.125507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.118 15:15:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:54.118 15:15:09 -- common/autotest_common.sh@850 -- # return 0 00:27:54.118 15:15:09 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:54.376 15:15:10 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:54.634 NVMe0n1 00:27:54.634 15:15:10 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:54.634 15:15:10 -- host/timeout.sh@51 -- # rpc_pid=88400 00:27:54.634 15:15:10 -- host/timeout.sh@53 -- # sleep 1 00:27:54.893 Running I/O for 10 seconds... 00:27:55.828 15:15:11 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.089 [2024-04-18 15:15:11.599231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.599987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 
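The host side of the test is a standalone bdevperf (core mask 0x4, queue depth 128, 4 KiB verify workload, started with -z so it idles until driven over its own RPC socket). timeout.sh then sets the bdev_nvme options the test depends on, attaches the remote controller with a 5-second controller-loss timeout and a 2-second reconnect delay, starts the workload, and finally removes the only listener to cut the host's path to the subsystem, which is what triggers the error burst that follows. Condensed from the trace with the same sockets, names, and options; the comments are interpretation, not script text:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # idle bdevperf instance with its own RPC socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # bdev_nvme options as set by timeout.sh (-r -1, as logged)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach the remote namespace; the loss/reconnect timers are what this test measures
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the verify workload in the background, then pull the listener out from under it
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420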
15:15:11.600507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600829] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.600964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.601006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.601048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.601091] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddcc0 is same with the state(5) to be set 00:27:56.089 [2024-04-18 15:15:11.601666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.601979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.601989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.089 [2024-04-18 15:15:11.602250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.089 [2024-04-18 15:15:11.602260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602450] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97048 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.602989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:56.090 [2024-04-18 15:15:11.603091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603500] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.090 [2024-04-18 15:15:11.603847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:56.090 [2024-04-18 15:15:11.603868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.090 [2024-04-18 15:15:11.603879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.603899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.603920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 
[2024-04-18 15:15:11.603942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.603963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.603984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.603994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.091 [2024-04-18 15:15:11.604415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2486f70 is same with the state(5) to be set 00:27:56.091 [2024-04-18 15:15:11.604440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:56.091 [2024-04-18 15:15:11.604448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:56.091 [2024-04-18 15:15:11.604457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96608 len:8 PRP1 0x0 PRP2 0x0 00:27:56.091 [2024-04-18 15:15:11.604466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.091 [2024-04-18 15:15:11.604526] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2486f70 was disconnected and freed. reset controller. 00:27:56.091 [2024-04-18 15:15:11.604781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.091 [2024-04-18 15:15:11.604866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ddc0 (9): Bad file descriptor 00:27:56.091 [2024-04-18 15:15:11.609805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ddc0 (9): Bad file descriptor 00:27:56.091 [2024-04-18 15:15:11.609855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:56.091 [2024-04-18 15:15:11.609869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:56.091 [2024-04-18 15:15:11.609883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:56.091 [2024-04-18 15:15:11.609907] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
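(Annotation: the abort/reset storm above appears to be the controller-loss scenario host/timeout.sh is exercising. A condensed sketch of the sequence, reconstructed only from the commands already logged in this run — socket path, NQN and timeout values are the harness's own; backgrounding of perform_tests is assumed here purely for readability:

    # attach through bdevperf's RPC socket with a 5 s controller-loss timeout and 2 s reconnect delay
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start I/O, then remove the target's listener out from under the initiator
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is gone, the queued WRITE/READ commands are aborted with SQ DELETION, the qpair is disconnected and freed, and bdev_nvme keeps retrying the connection at the reconnect delay until the controller-loss timeout expires, which is the repeated "Resetting controller failed" seen below.)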
00:27:56.091 [2024-04-18 15:15:11.609921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:56.091 15:15:11 -- host/timeout.sh@56 -- # sleep 2 00:27:57.995 [2024-04-18 15:15:13.606859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.996 [2024-04-18 15:15:13.606973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:57.996 [2024-04-18 15:15:13.606990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241ddc0 with addr=10.0.0.2, port=4420 00:27:57.996 [2024-04-18 15:15:13.607006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ddc0 is same with the state(5) to be set 00:27:57.996 [2024-04-18 15:15:13.607038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ddc0 (9): Bad file descriptor 00:27:57.996 [2024-04-18 15:15:13.607059] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:57.996 [2024-04-18 15:15:13.607069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:57.996 [2024-04-18 15:15:13.607080] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:57.996 [2024-04-18 15:15:13.607111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:57.996 [2024-04-18 15:15:13.607123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:57.996 15:15:13 -- host/timeout.sh@57 -- # get_controller 00:27:57.996 15:15:13 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:57.996 15:15:13 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:58.254 15:15:13 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:27:58.254 15:15:13 -- host/timeout.sh@58 -- # get_bdev 00:27:58.254 15:15:13 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:58.255 15:15:13 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:58.513 15:15:14 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:27:58.513 15:15:14 -- host/timeout.sh@61 -- # sleep 5 00:28:00.442 [2024-04-18 15:15:15.604080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.442 [2024-04-18 15:15:15.604194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:00.442 [2024-04-18 15:15:15.604209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241ddc0 with addr=10.0.0.2, port=4420 00:28:00.442 [2024-04-18 15:15:15.604224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ddc0 is same with the state(5) to be set 00:28:00.442 [2024-04-18 15:15:15.604256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ddc0 (9): Bad file descriptor 00:28:00.442 [2024-04-18 15:15:15.604276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:00.442 [2024-04-18 15:15:15.604286] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:00.443 [2024-04-18 15:15:15.604299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:28:00.443 [2024-04-18 15:15:15.604330] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:00.443 [2024-04-18 15:15:15.604341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.366 [2024-04-18 15:15:17.601165] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.933 00:28:02.933 Latency(us) 00:28:02.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.933 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:02.933 Verification LBA range: start 0x0 length 0x4000 00:28:02.933 NVMe0n1 : 8.18 1471.81 5.75 15.64 0.00 86062.74 1789.74 7061253.96 00:28:02.933 =================================================================================================================== 00:28:02.933 Total : 1471.81 5.75 15.64 0.00 86062.74 1789.74 7061253.96 00:28:02.933 0 00:28:03.502 15:15:19 -- host/timeout.sh@62 -- # get_controller 00:28:03.502 15:15:19 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:03.502 15:15:19 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:03.761 15:15:19 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:03.761 15:15:19 -- host/timeout.sh@63 -- # get_bdev 00:28:03.761 15:15:19 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:03.761 15:15:19 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:04.020 15:15:19 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:04.020 15:15:19 -- host/timeout.sh@65 -- # wait 88400 00:28:04.020 15:15:19 -- host/timeout.sh@67 -- # killprocess 88347 00:28:04.020 15:15:19 -- common/autotest_common.sh@936 -- # '[' -z 88347 ']' 00:28:04.020 15:15:19 -- common/autotest_common.sh@940 -- # kill -0 88347 00:28:04.020 15:15:19 -- common/autotest_common.sh@941 -- # uname 00:28:04.020 15:15:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:04.020 15:15:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88347 00:28:04.278 killing process with pid 88347 00:28:04.278 Received shutdown signal, test time was about 9.317763 seconds 00:28:04.278 00:28:04.278 Latency(us) 00:28:04.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.278 =================================================================================================================== 00:28:04.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.278 15:15:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:04.278 15:15:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:04.278 15:15:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88347' 00:28:04.278 15:15:19 -- common/autotest_common.sh@955 -- # kill 88347 00:28:04.278 15:15:19 -- common/autotest_common.sh@960 -- # wait 88347 00:28:04.278 15:15:19 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.538 [2024-04-18 15:15:20.182402] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.538 15:15:20 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:04.538 15:15:20 -- host/timeout.sh@74 -- # bdevperf_pid=88552 
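(Annotation: in the summary table above, MiB/s is just IOPS multiplied by the 4096-byte I/O size this bdevperf invocation uses (-o 4096); a quick sanity check of the first run's numbers:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1471.81 * 4096 / (1024 * 1024) }'   # prints 5.75, matching the table

The all-zero table that follows the shutdown signal appears to be the summary bdevperf prints again as it exits, after perform_tests has already finished.)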
00:28:04.538 15:15:20 -- host/timeout.sh@76 -- # waitforlisten 88552 /var/tmp/bdevperf.sock 00:28:04.538 15:15:20 -- common/autotest_common.sh@817 -- # '[' -z 88552 ']' 00:28:04.538 15:15:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:04.538 15:15:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:04.538 15:15:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:04.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:04.538 15:15:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:04.538 15:15:20 -- common/autotest_common.sh@10 -- # set +x 00:28:04.797 [2024-04-18 15:15:20.249973] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:28:04.797 [2024-04-18 15:15:20.250097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88552 ] 00:28:04.797 [2024-04-18 15:15:20.398080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.056 [2024-04-18 15:15:20.515431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.624 15:15:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:05.624 15:15:21 -- common/autotest_common.sh@850 -- # return 0 00:28:05.624 15:15:21 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:05.884 15:15:21 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:06.190 NVMe0n1 00:28:06.190 15:15:21 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:06.190 15:15:21 -- host/timeout.sh@84 -- # rpc_pid=88601 00:28:06.190 15:15:21 -- host/timeout.sh@86 -- # sleep 1 00:28:06.190 Running I/O for 10 seconds... 
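(Annotation: this second bdevperf instance (pid 88552) repeats the exercise with a fast-I/O-fail window added. The attach at host/timeout.sh@79 above combines --fast-io-fail-timeout-sec 2 with --ctrlr-loss-timeout-sec 5 and a 1 s reconnect delay, so pending I/O should presumably start failing roughly 2 s after the listener disappears rather than waiting out the full controller-loss timeout. The command, as logged, repeated here only for readability:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
)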
00:28:07.146 15:15:22 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.408 [2024-04-18 15:15:23.024648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024708] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024759] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024769] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024885] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.024894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd16d0 is same with the state(5) to be set 00:28:07.408 [2024-04-18 15:15:23.025403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.408 [2024-04-18 15:15:23.025590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:07.408 [2024-04-18 15:15:23.025652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.408 [2024-04-18 15:15:23.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.408 [2024-04-18 15:15:23.025986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.409 [2024-04-18 15:15:23.026517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:07.409 [2024-04-18 15:15:23.026527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026749] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.409 [2024-04-18 15:15:23.026821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.409 [2024-04-18 15:15:23.026833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.410 [2024-04-18 15:15:23.026842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026955] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.026995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99624 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 
[2024-04-18 15:15:23.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-04-18 15:15:23.027656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.410 [2024-04-18 15:15:23.027666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-04-18 15:15:23.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.027985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.027994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.411 [2024-04-18 15:15:23.028159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86f70 is same with the state(5) to be set 00:28:07.411 [2024-04-18 15:15:23.028182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.411 [2024-04-18 15:15:23.028190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.411 [2024-04-18 15:15:23.028199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 00:28:07.411 [2024-04-18 15:15:23.028219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028271] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc86f70 was disconnected and freed. reset controller. 
00:28:07.411 [2024-04-18 15:15:23.028347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.411 [2024-04-18 15:15:23.028360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.411 [2024-04-18 15:15:23.028381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.411 [2024-04-18 15:15:23.028400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:07.411 [2024-04-18 15:15:23.028420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.411 [2024-04-18 15:15:23.028430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:07.411 [2024-04-18 15:15:23.028638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.411 [2024-04-18 15:15:23.028660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:07.411 [2024-04-18 15:15:23.028765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-04-18 15:15:23.028802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.411 [2024-04-18 15:15:23.028818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:07.411 [2024-04-18 15:15:23.028829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:07.411 [2024-04-18 15:15:23.028847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:07.411 [2024-04-18 15:15:23.028861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:07.411 [2024-04-18 15:15:23.028871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:07.411 [2024-04-18 15:15:23.028883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:07.411 [2024-04-18 15:15:23.028902] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:07.411 [2024-04-18 15:15:23.028912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:07.411 15:15:23 -- host/timeout.sh@90 -- # sleep 1 00:28:08.374 [2024-04-18 15:15:24.027458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-04-18 15:15:24.027578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.374 [2024-04-18 15:15:24.027594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:08.374 [2024-04-18 15:15:24.027608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:08.374 [2024-04-18 15:15:24.027645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:08.374 [2024-04-18 15:15:24.027664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.374 [2024-04-18 15:15:24.027674] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.374 [2024-04-18 15:15:24.027685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.374 [2024-04-18 15:15:24.027711] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.374 [2024-04-18 15:15:24.027721] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.374 15:15:24 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.633 [2024-04-18 15:15:24.254761] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.633 15:15:24 -- host/timeout.sh@92 -- # wait 88601 00:28:09.569 [2024-04-18 15:15:25.042473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.212 00:28:16.212 Latency(us) 00:28:16.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.212 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:16.212 Verification LBA range: start 0x0 length 0x4000 00:28:16.212 NVMe0n1 : 10.01 7705.64 30.10 0.00 0.00 16582.45 1487.06 3018551.31 00:28:16.212 =================================================================================================================== 00:28:16.212 Total : 7705.64 30.10 0.00 0.00 16582.45 1487.06 3018551.31 00:28:16.212 0 00:28:16.212 15:15:31 -- host/timeout.sh@97 -- # rpc_pid=88723 00:28:16.212 15:15:31 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:16.212 15:15:31 -- host/timeout.sh@98 -- # sleep 1 00:28:16.497 Running I/O for 10 seconds... 
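The run summarized just above recovers because host/timeout.sh puts the subsystem listener back before the 5 s controller-loss timeout expires: while the listener is gone every connect() fails with errno 111 and the initiator keeps scheduling a reset once per second, and as soon as the listener returns the next reset succeeds, so bdevperf completes with the roughly 7.7k IOPS result shown. A minimal sketch of that target-side toggle (assuming the target uses the default RPC socket, as the unprefixed rpc.py calls in this log do; the 3 s pause is illustrative, the real script times it against the configured timeouts):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # drop the listener: host-side connects start failing (errno 111) and resets retry
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # stay under --ctrlr-loss-timeout-sec 5 so the controller is not deleted
    # restore the listener: the next reconnect attempt succeeds and I/O resumes
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420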
00:28:17.438 15:15:32 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.438 [2024-04-18 15:15:33.067888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.067972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.067991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068005] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068269] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.438 [2024-04-18 15:15:33.068510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068571] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068664] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068743] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068757] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068786] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068798] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.068838] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1190 is same with the state(5) to be set 00:28:17.439 [2024-04-18 15:15:33.069144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 
15:15:33.069407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.439 [2024-04-18 15:15:33.069701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.439 [2024-04-18 15:15:33.069711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.069980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.069991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94712 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.440 [2024-04-18 15:15:33.070268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.440 [2024-04-18 15:15:33.070450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.440 [2024-04-18 15:15:33.070460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070695] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.441 [2024-04-18 15:15:33.070716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.070987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.070998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:17.441 [2024-04-18 15:15:33.071131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.441 [2024-04-18 15:15:33.071244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.441 [2024-04-18 15:15:33.071255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 
15:15:33.071357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.442 [2024-04-18 15:15:33.071873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc65440 is same with the state(5) to be set 00:28:17.442 [2024-04-18 15:15:33.071897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:17.442 [2024-04-18 15:15:33.071905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:17.442 [2024-04-18 15:15:33.071913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95320 len:8 PRP1 0x0 PRP2 0x0 00:28:17.442 [2024-04-18 15:15:33.071922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.442 [2024-04-18 15:15:33.071984] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc65440 was disconnected and freed. reset controller. 
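The dump above is the host side draining qpair 0xc65440 after the listener went away: every queued READ/WRITE is completed with ABORTED - SQ DELETION, where (00/08) is the generic status code type and the Command Aborted due to SQ Deletion status code, then the qpair is freed and another controller reset is scheduled. While the target port is still down, the initiator-side controller remains registered in bdevperf and can be inspected over the same RPC socket; a minimal check, assuming the socket path used throughout this log:

  # List the bdev_nvme controllers known to the bdevperf app; NVMe0 stays in the
  # list while bdev_nvme keeps retrying the reset against the missing listener.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers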
00:28:17.442 [2024-04-18 15:15:33.072213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:17.442 [2024-04-18 15:15:33.072293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:17.442 [2024-04-18 15:15:33.072386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.442 [2024-04-18 15:15:33.072424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:17.442 [2024-04-18 15:15:33.072437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:17.442 [2024-04-18 15:15:33.072448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:17.442 [2024-04-18 15:15:33.072464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:17.443 [2024-04-18 15:15:33.072478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:17.443 [2024-04-18 15:15:33.072488] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:17.443 [2024-04-18 15:15:33.072501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:17.443 [2024-04-18 15:15:33.072520] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.443 [2024-04-18 15:15:33.072530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:17.443 15:15:33 -- host/timeout.sh@101 -- # sleep 3 00:28:18.378 [2024-04-18 15:15:34.071071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.378 [2024-04-18 15:15:34.071176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.378 [2024-04-18 15:15:34.071192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:18.378 [2024-04-18 15:15:34.071219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:18.378 [2024-04-18 15:15:34.071245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:18.378 [2024-04-18 15:15:34.071263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:18.378 [2024-04-18 15:15:34.071274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:18.378 [2024-04-18 15:15:34.071285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:18.378 [2024-04-18 15:15:34.071309] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:18.378 [2024-04-18 15:15:34.071319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.757 [2024-04-18 15:15:35.069856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.757 [2024-04-18 15:15:35.069965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.757 [2024-04-18 15:15:35.069993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:19.757 [2024-04-18 15:15:35.070009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:19.757 [2024-04-18 15:15:35.070036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:19.757 [2024-04-18 15:15:35.070066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.757 [2024-04-18 15:15:35.070078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.757 [2024-04-18 15:15:35.070089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:19.757 [2024-04-18 15:15:35.070114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.757 [2024-04-18 15:15:35.070124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.695 [2024-04-18 15:15:36.071106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.695 [2024-04-18 15:15:36.071207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:20.695 [2024-04-18 15:15:36.071223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc1ddc0 with addr=10.0.0.2, port=4420 00:28:20.695 [2024-04-18 15:15:36.071238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ddc0 is same with the state(5) to be set 00:28:20.695 [2024-04-18 15:15:36.071444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1ddc0 (9): Bad file descriptor 00:28:20.695 [2024-04-18 15:15:36.071652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:20.695 [2024-04-18 15:15:36.071665] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:20.695 [2024-04-18 15:15:36.071676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.695 [2024-04-18 15:15:36.074694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:20.695 [2024-04-18 15:15:36.074739] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.695 15:15:36 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.695 [2024-04-18 15:15:36.341640] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.695 15:15:36 -- host/timeout.sh@103 -- # wait 88723 00:28:21.633 [2024-04-18 15:15:37.109707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
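The second outage above follows the same rhythm: reconnect attempts at 15:15:33 through 15:15:36 all die in posix_sock_create with errno 111, the listener is re-added at timeout.sh@102, and the reset finally reports "Resetting controller successful" so wait 88723 can return. How long the host keeps retrying before dropping the controller is bounded by the reconnect options passed to bdev_nvme_attach_controller; the attach used for the next pass later in this log looks like the sketch below (flag values copied from that invocation, shown here only to tie the retry behaviour to its knobs):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_set_options -r -1 -e 9
  # Retry the connection every 2 seconds and give up on the controller only after
  # it has been unreachable for 5 seconds.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2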
00:28:26.918
00:28:26.918 Latency(us)
00:28:26.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:26.918 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:26.918 Verification LBA range: start 0x0 length 0x4000
00:28:26.918 NVMe0n1 : 10.00 6185.80 24.16 4736.44 0.00 11691.61 572.45 3018551.31
00:28:26.918 ===================================================================================================================
00:28:26.918 Total : 6185.80 24.16 4736.44 0.00 11691.61 0.00 3018551.31
00:28:26.918 0
00:28:26.918 15:15:41 -- host/timeout.sh@105 -- # killprocess 88552
00:28:26.918 15:15:41 -- common/autotest_common.sh@936 -- # '[' -z 88552 ']'
00:28:26.918 15:15:41 -- common/autotest_common.sh@940 -- # kill -0 88552
00:28:26.918 15:15:41 -- common/autotest_common.sh@941 -- # uname
00:28:26.918 15:15:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:26.918 15:15:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88552
00:28:26.918 killing process with pid 88552
00:28:26.918 Received shutdown signal, test time was about 10.000000 seconds
00:28:26.918
00:28:26.918 Latency(us)
00:28:26.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:26.918 ===================================================================================================================
00:28:26.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:26.918 15:15:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:28:26.918 15:15:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:28:26.918 15:15:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88552'
00:28:26.918 15:15:41 -- common/autotest_common.sh@955 -- # kill 88552
00:28:26.918 15:15:41 -- common/autotest_common.sh@960 -- # wait 88552
00:28:26.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:26.918 15:15:42 -- host/timeout.sh@110 -- # bdevperf_pid=88844
00:28:26.918 15:15:42 -- host/timeout.sh@112 -- # waitforlisten 88844 /var/tmp/bdevperf.sock
00:28:26.918 15:15:42 -- common/autotest_common.sh@817 -- # '[' -z 88844 ']'
00:28:26.918 15:15:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:26.918 15:15:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:26.918 15:15:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:26.918 15:15:42 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:26.918 15:15:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:26.918 15:15:42 -- common/autotest_common.sh@10 -- # set +x
00:28:26.918 [2024-04-18 15:15:42.250907] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization...
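Editor's note: the summary table above is internally consistent: 6185.80 IOPS at the 4096-byte IO size works out to 6185.80 * 4096 / 1048576 ~ 24.16 MiB/s, matching the MiB/s column, and the ~3.0 s maximum latency (3018551.31 us) together with the Fail/s figure is consistent with IOs that sat queued or were failed across the window in which the listener was removed. The trace then kills process 88552 and brings up a fresh bdevperf (pid 88844) for the next test case; condensed into a sketch built only from commands visible in this log (the trailing '&' and the strict ordering are illustrative, not copied from timeout.sh):

# Start bdevperf idle on core 2 (-z: wait for an RPC trigger), exposing its RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
# Apply the bdev_nvme options the test uses over that socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
# Attach the target as NVMe0 with a 5 s controller-loss timeout and a 2 s reconnect delay
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Kick off the 10-second randread workload defined on the bdevperf command line
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests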
00:28:26.918 [2024-04-18 15:15:42.250989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88844 ] 00:28:26.918 [2024-04-18 15:15:42.391804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.918 [2024-04-18 15:15:42.495105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.486 15:15:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:27.486 15:15:43 -- common/autotest_common.sh@850 -- # return 0 00:28:27.486 15:15:43 -- host/timeout.sh@116 -- # dtrace_pid=88872 00:28:27.486 15:15:43 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:28:27.486 15:15:43 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:28:27.742 15:15:43 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:27.998 NVMe0n1 00:28:27.998 15:15:43 -- host/timeout.sh@124 -- # rpc_pid=88924 00:28:27.998 15:15:43 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:27.998 15:15:43 -- host/timeout.sh@125 -- # sleep 1 00:28:28.255 Running I/O for 10 seconds... 00:28:29.190 15:15:44 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.190 [2024-04-18 15:15:44.846526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.190 [2024-04-18 15:15:44.846590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846612] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846630] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846711] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846737] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846745] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846780] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846788] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846796] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846822] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846865] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846981] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846990] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.846998] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 
00:28:29.191 [2024-04-18 15:15:44.847070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847103] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847200] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847239] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is 
same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.191 [2024-04-18 15:15:44.847333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847365] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847381] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847406] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847454] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847586] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847620] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847628] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847644] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847669] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc82620 is same with the state(5) to be set 00:28:29.192 [2024-04-18 15:15:44.847880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.847917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.847940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.847950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.847961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.847970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.847981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.847990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:29.192 [2024-04-18 15:15:44.848074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 
15:15:44.848269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.192 [2024-04-18 15:15:44.848319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.192 [2024-04-18 15:15:44.848330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.848982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.848992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122968 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.193 [2024-04-18 15:15:44.849128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.193 [2024-04-18 15:15:44.849137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 
[2024-04-18 15:15:44.849296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.194 [2024-04-18 15:15:44.849925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.194 [2024-04-18 15:15:44.849934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.849945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.849954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.849964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.849973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.849992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:29.195 [2024-04-18 15:15:44.850331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.195 [2024-04-18 15:15:44.850502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x549f70 is same with the state(5) to be set 00:28:29.195 [2024-04-18 15:15:44.850525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.195 [2024-04-18 15:15:44.850533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:28:29.195 [2024-04-18 15:15:44.850541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15000 len:8 PRP1 0x0 PRP2 0x0 00:28:29.195 [2024-04-18 15:15:44.850550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850618] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x549f70 was disconnected and freed. reset controller. 00:28:29.195 [2024-04-18 15:15:44.850704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.195 [2024-04-18 15:15:44.850716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.195 [2024-04-18 15:15:44.850729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.195 [2024-04-18 15:15:44.850738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.196 [2024-04-18 15:15:44.850765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.196 [2024-04-18 15:15:44.850775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.196 [2024-04-18 15:15:44.850785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.196 [2024-04-18 15:15:44.850794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.196 [2024-04-18 15:15:44.850804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e0dc0 is same with the state(5) to be set 00:28:29.196 [2024-04-18 15:15:44.851036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.196 [2024-04-18 15:15:44.851070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e0dc0 (9): Bad file descriptor 00:28:29.196 [2024-04-18 15:15:44.851166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.196 [2024-04-18 15:15:44.851203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.196 [2024-04-18 15:15:44.851215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e0dc0 with addr=10.0.0.2, port=4420 00:28:29.196 [2024-04-18 15:15:44.851225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e0dc0 is same with the state(5) to be set 00:28:29.196 [2024-04-18 15:15:44.851240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e0dc0 (9): Bad file descriptor 00:28:29.196 [2024-04-18 15:15:44.851254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.196 [2024-04-18 15:15:44.851263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.196 [2024-04-18 15:15:44.851273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:28:29.196 15:15:44 -- host/timeout.sh@128 -- # wait 88924 00:28:29.196 [2024-04-18 15:15:44.875785] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.196 [2024-04-18 15:15:44.875845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.730 [2024-04-18 15:15:46.872824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.730 [2024-04-18 15:15:46.872924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.730 [2024-04-18 15:15:46.872938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e0dc0 with addr=10.0.0.2, port=4420 00:28:31.730 [2024-04-18 15:15:46.872953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e0dc0 is same with the state(5) to be set 00:28:31.730 [2024-04-18 15:15:46.872981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e0dc0 (9): Bad file descriptor 00:28:31.730 [2024-04-18 15:15:46.872999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:31.730 [2024-04-18 15:15:46.873008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:31.730 [2024-04-18 15:15:46.873019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.730 [2024-04-18 15:15:46.873046] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:31.730 [2024-04-18 15:15:46.873057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:33.633 [2024-04-18 15:15:48.870058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-04-18 15:15:48.870154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.633 [2024-04-18 15:15:48.870170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e0dc0 with addr=10.0.0.2, port=4420 00:28:33.633 [2024-04-18 15:15:48.870186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e0dc0 is same with the state(5) to be set 00:28:33.633 [2024-04-18 15:15:48.870218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e0dc0 (9): Bad file descriptor 00:28:33.633 [2024-04-18 15:15:48.870237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:33.633 [2024-04-18 15:15:48.870246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:33.633 [2024-04-18 15:15:48.870257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:33.633 [2024-04-18 15:15:48.870283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:33.633 [2024-04-18 15:15:48.870294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:35.542 [2024-04-18 15:15:50.867158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:36.478 00:28:36.478 Latency(us) 00:28:36.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.478 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:28:36.478 NVMe0n1 : 8.11 2864.82 11.19 15.79 0.00 44424.22 1908.18 7061253.96 00:28:36.478 =================================================================================================================== 00:28:36.478 Total : 2864.82 11.19 15.79 0.00 44424.22 1908.18 7061253.96 00:28:36.478 0 00:28:36.478 15:15:51 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:36.478 Attaching 5 probes... 00:28:36.478 1150.027918: reset bdev controller NVMe0 00:28:36.478 1150.101311: reconnect bdev controller NVMe0 00:28:36.478 3171.667418: reconnect delay bdev controller NVMe0 00:28:36.478 3171.690006: reconnect bdev controller NVMe0 00:28:36.478 5168.869848: reconnect delay bdev controller NVMe0 00:28:36.478 5168.893845: reconnect bdev controller NVMe0 00:28:36.478 7166.097182: reconnect delay bdev controller NVMe0 00:28:36.478 7166.121672: reconnect bdev controller NVMe0 00:28:36.478 15:15:51 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:28:36.478 15:15:51 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:28:36.478 15:15:51 -- host/timeout.sh@136 -- # kill 88872 00:28:36.478 15:15:51 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:36.478 15:15:51 -- host/timeout.sh@139 -- # killprocess 88844 00:28:36.478 15:15:51 -- common/autotest_common.sh@936 -- # '[' -z 88844 ']' 00:28:36.478 15:15:51 -- common/autotest_common.sh@940 -- # kill -0 88844 00:28:36.478 15:15:51 -- common/autotest_common.sh@941 -- # uname 00:28:36.478 15:15:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:36.478 15:15:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88844 00:28:36.478 killing process with pid 88844 00:28:36.478 Received shutdown signal, test time was about 8.192521 seconds 00:28:36.478 00:28:36.478 Latency(us) 00:28:36.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.478 =================================================================================================================== 00:28:36.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.478 15:15:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:36.478 15:15:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:36.478 15:15:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88844' 00:28:36.478 15:15:51 -- common/autotest_common.sh@955 -- # kill 88844 00:28:36.478 15:15:51 -- common/autotest_common.sh@960 -- # wait 88844 00:28:36.478 15:15:52 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.736 15:15:52 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:28:36.736 15:15:52 -- host/timeout.sh@145 -- # nvmftestfini 00:28:36.736 15:15:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:36.736 15:15:52 -- nvmf/common.sh@117 -- # sync 00:28:36.736 15:15:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:36.736 15:15:52 -- nvmf/common.sh@120 -- # set +e 00:28:36.736 15:15:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:36.736 15:15:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:36.736 rmmod nvme_tcp 00:28:36.736 rmmod nvme_fabrics 00:28:36.736 rmmod nvme_keyring 00:28:37.018 15:15:52 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:28:37.018 15:15:52 -- nvmf/common.sh@124 -- # set -e 00:28:37.018 15:15:52 -- nvmf/common.sh@125 -- # return 0 00:28:37.018 15:15:52 -- nvmf/common.sh@478 -- # '[' -n 88260 ']' 00:28:37.018 15:15:52 -- nvmf/common.sh@479 -- # killprocess 88260 00:28:37.018 15:15:52 -- common/autotest_common.sh@936 -- # '[' -z 88260 ']' 00:28:37.018 15:15:52 -- common/autotest_common.sh@940 -- # kill -0 88260 00:28:37.018 15:15:52 -- common/autotest_common.sh@941 -- # uname 00:28:37.018 15:15:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:37.018 15:15:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88260 00:28:37.018 killing process with pid 88260 00:28:37.018 15:15:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:37.018 15:15:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:37.018 15:15:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88260' 00:28:37.018 15:15:52 -- common/autotest_common.sh@955 -- # kill 88260 00:28:37.018 15:15:52 -- common/autotest_common.sh@960 -- # wait 88260 00:28:37.278 15:15:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:37.278 15:15:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:37.278 15:15:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:37.278 15:15:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.278 15:15:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.278 15:15:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.278 15:15:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.278 15:15:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.278 15:15:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:37.278 00:28:37.278 real 0m46.741s 00:28:37.278 user 2m15.305s 00:28:37.278 sys 0m6.411s 00:28:37.278 15:15:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:37.278 ************************************ 00:28:37.278 END TEST nvmf_timeout 00:28:37.278 ************************************ 00:28:37.278 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:28:37.278 15:15:52 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:28:37.278 15:15:52 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:28:37.278 15:15:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:37.278 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:28:37.278 15:15:52 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:37.278 00:28:37.278 real 11m55.340s 00:28:37.278 user 30m40.271s 00:28:37.278 sys 3m21.862s 00:28:37.278 15:15:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:37.278 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:28:37.278 ************************************ 00:28:37.278 END TEST nvmf_tcp 00:28:37.278 ************************************ 00:28:37.278 15:15:52 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:28:37.278 15:15:52 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:37.278 15:15:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:37.278 15:15:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:37.278 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:28:37.538 ************************************ 00:28:37.538 START TEST spdkcli_nvmf_tcp 00:28:37.538 ************************************ 00:28:37.538 15:15:53 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:37.538 * Looking for test storage... 00:28:37.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:37.538 15:15:53 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:37.538 15:15:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:37.538 15:15:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:37.538 15:15:53 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:37.538 15:15:53 -- nvmf/common.sh@7 -- # uname -s 00:28:37.538 15:15:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.538 15:15:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.538 15:15:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.538 15:15:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.538 15:15:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.538 15:15:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.538 15:15:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.538 15:15:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.538 15:15:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.538 15:15:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.538 15:15:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:37.538 15:15:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:37.538 15:15:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.538 15:15:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.538 15:15:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:37.538 15:15:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.538 15:15:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:37.538 15:15:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.797 15:15:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.797 15:15:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.797 15:15:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.797 15:15:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.797 15:15:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.797 15:15:53 -- paths/export.sh@5 -- # export PATH 00:28:37.797 15:15:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.797 15:15:53 -- nvmf/common.sh@47 -- # : 0 00:28:37.797 15:15:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:37.797 15:15:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:37.797 15:15:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.797 15:15:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.797 15:15:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.797 15:15:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:37.797 15:15:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:37.797 15:15:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:37.797 15:15:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:37.797 15:15:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:37.797 15:15:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:37.797 15:15:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:37.797 15:15:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:37.797 15:15:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.797 15:15:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:37.797 15:15:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=89152 00:28:37.797 15:15:53 -- spdkcli/common.sh@34 -- # waitforlisten 89152 00:28:37.797 15:15:53 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:37.797 15:15:53 -- common/autotest_common.sh@817 -- # '[' -z 89152 ']' 00:28:37.797 15:15:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.797 15:15:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:37.797 15:15:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.797 15:15:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:37.797 15:15:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.797 [2024-04-18 15:15:53.315464] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:28:37.797 [2024-04-18 15:15:53.315562] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89152 ] 00:28:37.797 [2024-04-18 15:15:53.447184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.055 [2024-04-18 15:15:53.546630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.055 [2024-04-18 15:15:53.546630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.623 15:15:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:38.623 15:15:54 -- common/autotest_common.sh@850 -- # return 0 00:28:38.623 15:15:54 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:38.623 15:15:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:38.623 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.623 15:15:54 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:38.623 15:15:54 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:38.623 15:15:54 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:38.623 15:15:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:38.623 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.623 15:15:54 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:38.623 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:38.623 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:38.623 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:38.623 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:38.623 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:38.623 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:38.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:38.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:38.623 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:38.623 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:38.623 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:38.623 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:38.623 ' 00:28:39.189 [2024-04-18 15:15:54.682125] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:41.768 [2024-04-18 15:15:56.971794] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.706 [2024-04-18 15:15:58.286872] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:28:45.256 [2024-04-18 15:16:00.728992] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:28:47.160 [2024-04-18 15:16:02.867008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:49.060 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:49.060 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:49.060 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:49.060 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:49.060 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:49.060 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:49.060 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:49.060 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:49.060 15:16:04 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:49.060 15:16:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:49.060 15:16:04 -- common/autotest_common.sh@10 -- # set +x 00:28:49.060 15:16:04 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:49.060 15:16:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:49.060 15:16:04 -- common/autotest_common.sh@10 -- # set +x 00:28:49.060 15:16:04 -- spdkcli/nvmf.sh@69 -- # check_match 00:28:49.060 15:16:04 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:28:49.627 15:16:05 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:49.627 15:16:05 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:49.627 15:16:05 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:49.627 15:16:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:49.627 15:16:05 -- common/autotest_common.sh@10 -- # set +x 00:28:49.627 15:16:05 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:49.627 15:16:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:49.627 15:16:05 -- common/autotest_common.sh@10 -- # set +x 00:28:49.627 15:16:05 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:49.627 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:49.627 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:49.627 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:49.627 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:28:49.627 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:28:49.627 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:49.627 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:49.627 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:49.627 ' 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:56.198 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:56.198 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:56.198 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:56.199 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:56.199 15:16:10 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:56.199 15:16:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:56.199 15:16:10 -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 15:16:10 -- spdkcli/nvmf.sh@90 -- # killprocess 89152 00:28:56.199 15:16:10 -- common/autotest_common.sh@936 -- # '[' -z 89152 ']' 00:28:56.199 15:16:10 -- common/autotest_common.sh@940 -- # kill -0 89152 00:28:56.199 15:16:10 -- common/autotest_common.sh@941 -- # uname 00:28:56.199 15:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:56.199 15:16:10 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 89152 00:28:56.199 15:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:56.199 15:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:56.199 killing process with pid 89152 00:28:56.199 15:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89152' 00:28:56.199 15:16:10 -- common/autotest_common.sh@955 -- # kill 89152 00:28:56.199 [2024-04-18 15:16:10.925323] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:56.199 15:16:10 -- common/autotest_common.sh@960 -- # wait 89152 00:28:56.199 15:16:11 -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:56.199 15:16:11 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:56.199 15:16:11 -- spdkcli/common.sh@13 -- # '[' -n 89152 ']' 00:28:56.199 15:16:11 -- spdkcli/common.sh@14 -- # killprocess 89152 00:28:56.199 15:16:11 -- common/autotest_common.sh@936 -- # '[' -z 89152 ']' 00:28:56.199 15:16:11 -- common/autotest_common.sh@940 -- # kill -0 89152 00:28:56.199 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89152) - No such process 00:28:56.199 Process with pid 89152 is not found 00:28:56.199 15:16:11 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89152 is not found' 00:28:56.199 15:16:11 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:56.199 15:16:11 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:56.199 15:16:11 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:56.199 00:28:56.199 real 0m18.067s 00:28:56.199 user 0m39.402s 00:28:56.199 sys 0m1.179s 00:28:56.199 15:16:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.199 ************************************ 00:28:56.199 END TEST spdkcli_nvmf_tcp 00:28:56.199 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 ************************************ 00:28:56.199 15:16:11 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:56.199 15:16:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:56.199 15:16:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.199 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.199 ************************************ 00:28:56.199 START TEST nvmf_identify_passthru 00:28:56.199 ************************************ 00:28:56.199 15:16:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:56.199 * Looking for test storage... 
00:28:56.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:56.199 15:16:11 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:56.199 15:16:11 -- nvmf/common.sh@7 -- # uname -s 00:28:56.199 15:16:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.199 15:16:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.199 15:16:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.199 15:16:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.199 15:16:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.199 15:16:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.199 15:16:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.199 15:16:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.199 15:16:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.199 15:16:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.199 15:16:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:56.199 15:16:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:56.199 15:16:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.199 15:16:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.199 15:16:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:56.199 15:16:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.199 15:16:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:56.199 15:16:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.199 15:16:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.199 15:16:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.199 15:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@5 -- # export PATH 00:28:56.199 15:16:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- nvmf/common.sh@47 -- # : 0 00:28:56.199 15:16:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:56.199 15:16:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:56.199 15:16:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.199 15:16:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.199 15:16:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.199 15:16:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:56.199 15:16:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:56.199 15:16:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:56.199 15:16:11 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:56.199 15:16:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.199 15:16:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.199 15:16:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.199 15:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- paths/export.sh@5 -- # export PATH 00:28:56.199 15:16:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.199 15:16:11 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:28:56.199 15:16:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:56.199 15:16:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.199 15:16:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:56.200 15:16:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:56.200 15:16:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:56.200 15:16:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.200 15:16:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:56.200 15:16:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.200 15:16:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:56.200 15:16:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:56.200 15:16:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:56.200 15:16:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:56.200 15:16:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:56.200 15:16:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:56.200 15:16:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.200 15:16:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.200 15:16:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:56.200 15:16:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:56.200 15:16:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:56.200 15:16:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:56.200 15:16:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:56.200 15:16:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.200 15:16:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:56.200 15:16:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:56.200 15:16:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:56.200 15:16:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:56.200 15:16:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:56.200 15:16:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:56.200 Cannot find device "nvmf_tgt_br" 00:28:56.200 15:16:11 -- nvmf/common.sh@155 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:56.200 Cannot find device "nvmf_tgt_br2" 00:28:56.200 15:16:11 -- nvmf/common.sh@156 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:56.200 15:16:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:56.200 Cannot find device "nvmf_tgt_br" 00:28:56.200 15:16:11 -- nvmf/common.sh@158 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:56.200 Cannot find device "nvmf_tgt_br2" 00:28:56.200 15:16:11 -- nvmf/common.sh@159 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:56.200 15:16:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:56.200 15:16:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:56.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:56.200 15:16:11 -- nvmf/common.sh@162 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:56.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:28:56.200 15:16:11 -- nvmf/common.sh@163 -- # true 00:28:56.200 15:16:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:56.200 15:16:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:56.200 15:16:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:56.200 15:16:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:56.200 15:16:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:56.200 15:16:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:56.200 15:16:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:56.200 15:16:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:56.200 15:16:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:56.200 15:16:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:56.200 15:16:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:56.200 15:16:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:56.200 15:16:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:56.200 15:16:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:56.200 15:16:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:56.200 15:16:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:56.200 15:16:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:56.200 15:16:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:56.200 15:16:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:56.200 15:16:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:56.200 15:16:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:56.200 15:16:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:56.200 15:16:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:56.200 15:16:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:56.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:28:56.200 00:28:56.200 --- 10.0.0.2 ping statistics --- 00:28:56.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.200 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:56.200 15:16:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:56.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:56.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:28:56.200 00:28:56.200 --- 10.0.0.3 ping statistics --- 00:28:56.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.200 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:28:56.200 15:16:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:56.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:28:56.460 00:28:56.460 --- 10.0.0.1 ping statistics --- 00:28:56.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.460 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:28:56.460 15:16:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.460 15:16:11 -- nvmf/common.sh@422 -- # return 0 00:28:56.460 15:16:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:56.460 15:16:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.460 15:16:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:56.460 15:16:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:56.460 15:16:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.460 15:16:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:56.460 15:16:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:56.460 15:16:11 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:56.460 15:16:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:56.460 15:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:56.460 15:16:11 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:56.460 15:16:11 -- common/autotest_common.sh@1510 -- # bdfs=() 00:28:56.460 15:16:11 -- common/autotest_common.sh@1510 -- # local bdfs 00:28:56.460 15:16:11 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:28:56.460 15:16:11 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:28:56.460 15:16:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:28:56.460 15:16:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:28:56.460 15:16:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:56.460 15:16:11 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:56.460 15:16:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:28:56.460 15:16:12 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:28:56.460 15:16:12 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:56.460 15:16:12 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:28:56.460 15:16:12 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:28:56.460 15:16:12 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:28:56.460 15:16:12 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:56.460 15:16:12 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:28:56.460 15:16:12 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:56.719 15:16:12 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:28:56.719 15:16:12 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:28:56.719 15:16:12 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:56.719 15:16:12 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:56.719 15:16:12 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:28:56.719 15:16:12 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:56.719 15:16:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:56.719 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:56.979 15:16:12 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:28:56.979 15:16:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:56.979 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:56.979 15:16:12 -- target/identify_passthru.sh@31 -- # nvmfpid=89655 00:28:56.979 15:16:12 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:56.979 15:16:12 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:56.979 15:16:12 -- target/identify_passthru.sh@35 -- # waitforlisten 89655 00:28:56.979 15:16:12 -- common/autotest_common.sh@817 -- # '[' -z 89655 ']' 00:28:56.979 15:16:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.979 15:16:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.979 15:16:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.979 15:16:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.979 15:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:56.979 [2024-04-18 15:16:12.522934] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:28:56.979 [2024-04-18 15:16:12.523010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.979 [2024-04-18 15:16:12.664016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:57.239 [2024-04-18 15:16:12.747509] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.239 [2024-04-18 15:16:12.747585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.239 [2024-04-18 15:16:12.747595] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.239 [2024-04-18 15:16:12.747604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.239 [2024-04-18 15:16:12.747611] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
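For reference, the start_nvmf_tgt step traced above reduces to two actions: launch nvmf_tgt inside the test namespace with --wait-for-rpc, then block until its RPC socket exists before issuing any RPCs. A minimal sketch, assuming the build-tree path and default socket location used in this run:
# Condensed from the start_nvmf_tgt trace above; PID bookkeeping and timeouts are omitted.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Wait for the UNIX-domain RPC socket to appear before sending configuration RPCs.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done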
00:28:57.239 [2024-04-18 15:16:12.747717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.239 [2024-04-18 15:16:12.747919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.239 [2024-04-18 15:16:12.748759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.239 [2024-04-18 15:16:12.748760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.807 15:16:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.807 15:16:13 -- common/autotest_common.sh@850 -- # return 0 00:28:57.807 15:16:13 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:57.807 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.807 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:57.807 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.807 15:16:13 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:57.807 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.807 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:57.807 [2024-04-18 15:16:13.461069] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:57.807 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.807 15:16:13 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.807 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.807 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:57.807 [2024-04-18 15:16:13.474454] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.807 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.807 15:16:13 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:57.807 15:16:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:57.807 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 15:16:13 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:28:58.067 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.067 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 Nvme0n1 00:28:58.067 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.067 15:16:13 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:58.067 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.067 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.067 15:16:13 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:58.067 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.067 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.067 15:16:13 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.067 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.067 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 [2024-04-18 15:16:13.643062] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.067 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:28:58.067 15:16:13 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:58.067 15:16:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.067 15:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:58.067 [2024-04-18 15:16:13.654813] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:58.067 [ 00:28:58.067 { 00:28:58.067 "allow_any_host": true, 00:28:58.067 "hosts": [], 00:28:58.067 "listen_addresses": [], 00:28:58.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:58.067 "subtype": "Discovery" 00:28:58.067 }, 00:28:58.067 { 00:28:58.067 "allow_any_host": true, 00:28:58.067 "hosts": [], 00:28:58.067 "listen_addresses": [ 00:28:58.067 { 00:28:58.067 "adrfam": "IPv4", 00:28:58.067 "traddr": "10.0.0.2", 00:28:58.067 "transport": "TCP", 00:28:58.067 "trsvcid": "4420", 00:28:58.067 "trtype": "TCP" 00:28:58.067 } 00:28:58.067 ], 00:28:58.067 "max_cntlid": 65519, 00:28:58.067 "max_namespaces": 1, 00:28:58.067 "min_cntlid": 1, 00:28:58.067 "model_number": "SPDK bdev Controller", 00:28:58.067 "namespaces": [ 00:28:58.067 { 00:28:58.067 "bdev_name": "Nvme0n1", 00:28:58.067 "name": "Nvme0n1", 00:28:58.067 "nguid": "7B3E6BA156FD443F8735931381642C6E", 00:28:58.067 "nsid": 1, 00:28:58.067 "uuid": "7b3e6ba1-56fd-443f-8735-931381642c6e" 00:28:58.067 } 00:28:58.067 ], 00:28:58.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.067 "serial_number": "SPDK00000000000001", 00:28:58.067 "subtype": "NVMe" 00:28:58.067 } 00:28:58.067 ] 00:28:58.067 15:16:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.067 15:16:13 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:58.067 15:16:13 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:58.067 15:16:13 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:58.327 15:16:13 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:28:58.327 15:16:13 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:58.327 15:16:13 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:58.327 15:16:13 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:58.586 15:16:14 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:28:58.586 15:16:14 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:28:58.586 15:16:14 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:28:58.586 15:16:14 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:58.586 15:16:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.586 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:58.586 15:16:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.586 15:16:14 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:58.586 15:16:14 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:58.586 15:16:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:58.586 15:16:14 -- nvmf/common.sh@117 -- # sync 00:28:58.586 15:16:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.586 15:16:14 -- nvmf/common.sh@120 -- # set +e 00:28:58.586 15:16:14 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:28:58.586 15:16:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.586 rmmod nvme_tcp 00:28:58.586 rmmod nvme_fabrics 00:28:58.586 rmmod nvme_keyring 00:28:58.586 15:16:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.586 15:16:14 -- nvmf/common.sh@124 -- # set -e 00:28:58.586 15:16:14 -- nvmf/common.sh@125 -- # return 0 00:28:58.586 15:16:14 -- nvmf/common.sh@478 -- # '[' -n 89655 ']' 00:28:58.586 15:16:14 -- nvmf/common.sh@479 -- # killprocess 89655 00:28:58.586 15:16:14 -- common/autotest_common.sh@936 -- # '[' -z 89655 ']' 00:28:58.586 15:16:14 -- common/autotest_common.sh@940 -- # kill -0 89655 00:28:58.586 15:16:14 -- common/autotest_common.sh@941 -- # uname 00:28:58.586 15:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:58.586 15:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89655 00:28:58.586 15:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:58.586 15:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:58.586 15:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89655' 00:28:58.586 killing process with pid 89655 00:28:58.586 15:16:14 -- common/autotest_common.sh@955 -- # kill 89655 00:28:58.586 [2024-04-18 15:16:14.243181] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:58.586 15:16:14 -- common/autotest_common.sh@960 -- # wait 89655 00:28:58.843 15:16:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:58.843 15:16:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:58.843 15:16:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:58.843 15:16:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.843 15:16:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.843 15:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.843 15:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:58.843 15:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.843 15:16:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:58.843 00:28:58.843 real 0m3.193s 00:28:58.843 user 0m7.192s 00:28:58.843 sys 0m0.971s 00:28:58.843 15:16:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:58.843 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:58.843 ************************************ 00:28:58.843 END TEST nvmf_identify_passthru 00:28:58.843 ************************************ 00:28:58.843 15:16:14 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:58.843 15:16:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:58.843 15:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:58.843 15:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:59.102 ************************************ 00:28:59.102 START TEST nvmf_dif 00:28:59.102 ************************************ 00:28:59.102 15:16:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:28:59.102 * Looking for test storage... 
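The identify_passthru run that just finished boils down to one RPC sequence: enable the passthru identify handler before framework init, attach the local PCIe controller, export it over NVMe/TCP, and confirm that Identify data read through the target matches the raw device. A condensed sketch of that sequence, using scripts/rpc.py as the standalone equivalent of the rpc_cmd helper seen in the trace:
# Passthru-identify flow, mirroring the RPCs traced above.
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr      # must run before framework_start_init
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Serial/model reported over TCP should match what the PCIe device reports (12340 / QEMU in this run).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'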
00:28:59.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:59.102 15:16:14 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:59.102 15:16:14 -- nvmf/common.sh@7 -- # uname -s 00:28:59.102 15:16:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.102 15:16:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.102 15:16:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.102 15:16:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.102 15:16:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.102 15:16:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.102 15:16:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.102 15:16:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.102 15:16:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.102 15:16:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:59.102 15:16:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:28:59.102 15:16:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.102 15:16:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.102 15:16:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:59.102 15:16:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.102 15:16:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.102 15:16:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.102 15:16:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.102 15:16:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.102 15:16:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.102 15:16:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.102 15:16:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.102 15:16:14 -- paths/export.sh@5 -- # export PATH 00:28:59.102 15:16:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.102 15:16:14 -- nvmf/common.sh@47 -- # : 0 00:28:59.102 15:16:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:59.102 15:16:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:59.102 15:16:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.102 15:16:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.102 15:16:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.102 15:16:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:59.102 15:16:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:59.102 15:16:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:59.102 15:16:14 -- target/dif.sh@15 -- # NULL_META=16 00:28:59.102 15:16:14 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:59.102 15:16:14 -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:59.102 15:16:14 -- target/dif.sh@15 -- # NULL_DIF=1 00:28:59.102 15:16:14 -- target/dif.sh@135 -- # nvmftestinit 00:28:59.102 15:16:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:59.102 15:16:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.102 15:16:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:59.102 15:16:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:59.102 15:16:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:59.102 15:16:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.102 15:16:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:59.102 15:16:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.102 15:16:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:59.102 15:16:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:59.102 15:16:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.102 15:16:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.102 15:16:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:59.102 15:16:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:59.102 15:16:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:59.102 15:16:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:59.102 15:16:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:59.102 15:16:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.102 15:16:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:59.102 15:16:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:59.102 15:16:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:59.102 15:16:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:59.102 15:16:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:59.102 15:16:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:59.102 Cannot find device "nvmf_tgt_br" 
00:28:59.102 15:16:14 -- nvmf/common.sh@155 -- # true 00:28:59.102 15:16:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:59.102 Cannot find device "nvmf_tgt_br2" 00:28:59.102 15:16:14 -- nvmf/common.sh@156 -- # true 00:28:59.103 15:16:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:59.103 15:16:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:59.103 Cannot find device "nvmf_tgt_br" 00:28:59.103 15:16:14 -- nvmf/common.sh@158 -- # true 00:28:59.103 15:16:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:59.103 Cannot find device "nvmf_tgt_br2" 00:28:59.103 15:16:14 -- nvmf/common.sh@159 -- # true 00:28:59.103 15:16:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:59.103 15:16:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:59.103 15:16:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:59.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.103 15:16:14 -- nvmf/common.sh@162 -- # true 00:28:59.103 15:16:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:59.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:59.103 15:16:14 -- nvmf/common.sh@163 -- # true 00:28:59.103 15:16:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:59.103 15:16:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:59.103 15:16:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:59.103 15:16:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:59.103 15:16:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:59.360 15:16:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:59.360 15:16:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:59.360 15:16:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:59.360 15:16:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:59.360 15:16:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:59.360 15:16:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:59.360 15:16:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:59.360 15:16:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:59.360 15:16:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:59.360 15:16:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:59.360 15:16:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:59.360 15:16:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:59.360 15:16:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:59.360 15:16:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:59.360 15:16:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:59.360 15:16:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:59.360 15:16:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:59.360 15:16:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:59.360 15:16:14 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:59.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:28:59.360 00:28:59.360 --- 10.0.0.2 ping statistics --- 00:28:59.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.360 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:59.360 15:16:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:59.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:59.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:28:59.360 00:28:59.360 --- 10.0.0.3 ping statistics --- 00:28:59.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.360 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:28:59.360 15:16:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:59.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:28:59.360 00:28:59.360 --- 10.0.0.1 ping statistics --- 00:28:59.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.360 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:28:59.360 15:16:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.360 15:16:14 -- nvmf/common.sh@422 -- # return 0 00:28:59.360 15:16:14 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:59.360 15:16:14 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:59.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:59.618 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:59.618 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:59.618 15:16:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.618 15:16:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:59.618 15:16:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:59.618 15:16:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.618 15:16:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:59.618 15:16:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:59.618 15:16:15 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:59.618 15:16:15 -- target/dif.sh@137 -- # nvmfappstart 00:28:59.618 15:16:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:59.618 15:16:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:59.618 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:28:59.618 15:16:15 -- nvmf/common.sh@470 -- # nvmfpid=90001 00:28:59.618 15:16:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:59.618 15:16:15 -- nvmf/common.sh@471 -- # waitforlisten 90001 00:28:59.618 15:16:15 -- common/autotest_common.sh@817 -- # '[' -z 90001 ']' 00:28:59.618 15:16:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.618 15:16:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:59.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.618 15:16:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
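Both test suites rebuild the same veth-and-bridge loopback fabric before starting the target: the initiator stays in the root namespace at 10.0.0.1, the target runs in nvmf_tgt_ns_spdk behind 10.0.0.2/10.0.0.3, and the legs are stitched together with a Linux bridge. A stripped-down sketch of the nvmf_veth_init steps traced above (the second target interface and the teardown path are handled the same way):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge leg
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target    <-> bridge leg
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator veth
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity-check initiator -> target connectivity, as in the pings above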
00:28:59.618 15:16:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:59.878 15:16:15 -- common/autotest_common.sh@10 -- # set +x 00:28:59.878 [2024-04-18 15:16:15.379593] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:28:59.878 [2024-04-18 15:16:15.379670] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.878 [2024-04-18 15:16:15.523577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.138 [2024-04-18 15:16:15.617170] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.138 [2024-04-18 15:16:15.617222] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.138 [2024-04-18 15:16:15.617232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.138 [2024-04-18 15:16:15.617240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.138 [2024-04-18 15:16:15.617248] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.138 [2024-04-18 15:16:15.617279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.726 15:16:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:00.726 15:16:16 -- common/autotest_common.sh@850 -- # return 0 00:29:00.726 15:16:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:00.726 15:16:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:00.726 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.726 15:16:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.726 15:16:16 -- target/dif.sh@139 -- # create_transport 00:29:00.726 15:16:16 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:00.726 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.726 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.726 [2024-04-18 15:16:16.324282] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.726 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.726 15:16:16 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:00.726 15:16:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:00.726 15:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:00.726 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.726 ************************************ 00:29:00.726 START TEST fio_dif_1_default 00:29:00.726 ************************************ 00:29:00.726 15:16:16 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:29:00.726 15:16:16 -- target/dif.sh@86 -- # create_subsystems 0 00:29:00.726 15:16:16 -- target/dif.sh@28 -- # local sub 00:29:00.726 15:16:16 -- target/dif.sh@30 -- # for sub in "$@" 00:29:00.726 15:16:16 -- target/dif.sh@31 -- # create_subsystem 0 00:29:00.726 15:16:16 -- target/dif.sh@18 -- # local sub_id=0 00:29:00.726 15:16:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:00.726 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.726 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.986 bdev_null0 00:29:00.986 15:16:16 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.986 15:16:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:00.986 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.986 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.986 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.986 15:16:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:00.986 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.986 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.986 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.987 15:16:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.987 15:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:00.987 15:16:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.987 [2024-04-18 15:16:16.472169] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.987 15:16:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:00.987 15:16:16 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:00.987 15:16:16 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:00.987 15:16:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:00.987 15:16:16 -- nvmf/common.sh@521 -- # config=() 00:29:00.987 15:16:16 -- nvmf/common.sh@521 -- # local subsystem config 00:29:00.987 15:16:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:00.987 15:16:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:00.987 { 00:29:00.987 "params": { 00:29:00.987 "name": "Nvme$subsystem", 00:29:00.987 "trtype": "$TEST_TRANSPORT", 00:29:00.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.987 "adrfam": "ipv4", 00:29:00.987 "trsvcid": "$NVMF_PORT", 00:29:00.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.987 "hdgst": ${hdgst:-false}, 00:29:00.987 "ddgst": ${ddgst:-false} 00:29:00.987 }, 00:29:00.987 "method": "bdev_nvme_attach_controller" 00:29:00.987 } 00:29:00.987 EOF 00:29:00.987 )") 00:29:00.987 15:16:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.987 15:16:16 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.987 15:16:16 -- target/dif.sh@82 -- # gen_fio_conf 00:29:00.987 15:16:16 -- target/dif.sh@54 -- # local file 00:29:00.987 15:16:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:00.987 15:16:16 -- target/dif.sh@56 -- # cat 00:29:00.987 15:16:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.987 15:16:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:00.987 15:16:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:00.987 15:16:16 -- common/autotest_common.sh@1327 -- # shift 00:29:00.987 15:16:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:00.987 15:16:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.987 15:16:16 -- nvmf/common.sh@543 -- # cat 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:00.987 15:16:16 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:29:00.987 15:16:16 -- target/dif.sh@72 -- # (( file <= files )) 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:00.987 15:16:16 -- nvmf/common.sh@545 -- # jq . 00:29:00.987 15:16:16 -- nvmf/common.sh@546 -- # IFS=, 00:29:00.987 15:16:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:00.987 "params": { 00:29:00.987 "name": "Nvme0", 00:29:00.987 "trtype": "tcp", 00:29:00.987 "traddr": "10.0.0.2", 00:29:00.987 "adrfam": "ipv4", 00:29:00.987 "trsvcid": "4420", 00:29:00.987 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.987 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.987 "hdgst": false, 00:29:00.987 "ddgst": false 00:29:00.987 }, 00:29:00.987 "method": "bdev_nvme_attach_controller" 00:29:00.987 }' 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:00.987 15:16:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:00.987 15:16:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:00.987 15:16:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:00.987 15:16:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:00.987 15:16:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:00.987 15:16:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:01.246 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:01.246 fio-3.35 00:29:01.246 Starting 1 thread 00:29:13.503 00:29:13.503 filename0: (groupid=0, jobs=1): err= 0: pid=90095: Thu Apr 18 15:16:27 2024 00:29:13.503 read: IOPS=910, BW=3644KiB/s (3731kB/s)(35.7MiB/10025msec) 00:29:13.503 slat (nsec): min=5583, max=50695, avg=6420.78, stdev=2476.56 00:29:13.503 clat (usec): min=328, max=41634, avg=4372.62, stdev=12075.51 00:29:13.503 lat (usec): min=334, max=41640, avg=4379.04, stdev=12075.48 00:29:13.503 clat percentiles (usec): 00:29:13.503 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:29:13.503 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 375], 00:29:13.503 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 848], 95.00th=[40633], 00:29:13.503 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:13.503 | 99.99th=[41681] 00:29:13.503 bw ( KiB/s): min= 2400, max= 4512, per=100.00%, avg=3651.20, stdev=597.12, samples=20 00:29:13.503 iops : min= 600, max= 1128, avg=912.80, stdev=149.28, samples=20 00:29:13.503 lat (usec) : 500=89.54%, 750=0.43%, 1000=0.04% 00:29:13.503 lat (msec) : 2=0.04%, 4=0.04%, 50=9.90% 00:29:13.503 cpu : usr=85.27%, sys=14.15%, ctx=34, majf=0, minf=0 00:29:13.503 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.503 issued rwts: total=9132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.503 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:13.503 00:29:13.503 Run status group 0 
(all jobs): 00:29:13.503 READ: bw=3644KiB/s (3731kB/s), 3644KiB/s-3644KiB/s (3731kB/s-3731kB/s), io=35.7MiB (37.4MB), run=10025-10025msec 00:29:13.503 15:16:27 -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:13.503 15:16:27 -- target/dif.sh@43 -- # local sub 00:29:13.503 15:16:27 -- target/dif.sh@45 -- # for sub in "$@" 00:29:13.503 15:16:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:13.503 15:16:27 -- target/dif.sh@36 -- # local sub_id=0 00:29:13.503 15:16:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 00:29:13.503 real 0m11.094s 00:29:13.503 user 0m9.229s 00:29:13.503 sys 0m1.749s 00:29:13.503 15:16:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 ************************************ 00:29:13.503 END TEST fio_dif_1_default 00:29:13.503 ************************************ 00:29:13.503 15:16:27 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:13.503 15:16:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.503 15:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 ************************************ 00:29:13.503 START TEST fio_dif_1_multi_subsystems 00:29:13.503 ************************************ 00:29:13.503 15:16:27 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:29:13.503 15:16:27 -- target/dif.sh@92 -- # local files=1 00:29:13.503 15:16:27 -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:13.503 15:16:27 -- target/dif.sh@28 -- # local sub 00:29:13.503 15:16:27 -- target/dif.sh@30 -- # for sub in "$@" 00:29:13.503 15:16:27 -- target/dif.sh@31 -- # create_subsystem 0 00:29:13.503 15:16:27 -- target/dif.sh@18 -- # local sub_id=0 00:29:13.503 15:16:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 bdev_null0 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp 
-a 10.0.0.2 -s 4420 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 [2024-04-18 15:16:27.719856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@30 -- # for sub in "$@" 00:29:13.503 15:16:27 -- target/dif.sh@31 -- # create_subsystem 1 00:29:13.503 15:16:27 -- target/dif.sh@18 -- # local sub_id=1 00:29:13.503 15:16:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 bdev_null1 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.503 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.503 15:16:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.503 15:16:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.503 15:16:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.504 15:16:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.504 15:16:27 -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:13.504 15:16:27 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:13.504 15:16:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:13.504 15:16:27 -- nvmf/common.sh@521 -- # config=() 00:29:13.504 15:16:27 -- nvmf/common.sh@521 -- # local subsystem config 00:29:13.504 15:16:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:13.504 15:16:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:13.504 { 00:29:13.504 "params": { 00:29:13.504 "name": "Nvme$subsystem", 00:29:13.504 "trtype": "$TEST_TRANSPORT", 00:29:13.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.504 "adrfam": "ipv4", 00:29:13.504 "trsvcid": "$NVMF_PORT", 00:29:13.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.504 "hdgst": ${hdgst:-false}, 00:29:13.504 "ddgst": ${ddgst:-false} 00:29:13.504 }, 00:29:13.504 "method": "bdev_nvme_attach_controller" 00:29:13.504 } 00:29:13.504 EOF 00:29:13.504 )") 00:29:13.504 15:16:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.504 15:16:27 -- target/dif.sh@82 -- # gen_fio_conf 00:29:13.504 15:16:27 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.504 15:16:27 -- target/dif.sh@54 -- # local file 00:29:13.504 15:16:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:13.504 15:16:27 -- target/dif.sh@56 -- # cat 
00:29:13.504 15:16:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.504 15:16:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:13.504 15:16:27 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.504 15:16:27 -- common/autotest_common.sh@1327 -- # shift 00:29:13.504 15:16:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:13.504 15:16:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.504 15:16:27 -- nvmf/common.sh@543 -- # cat 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.504 15:16:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:13.504 15:16:27 -- target/dif.sh@72 -- # (( file <= files )) 00:29:13.504 15:16:27 -- target/dif.sh@73 -- # cat 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:13.504 15:16:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:13.504 15:16:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:13.504 { 00:29:13.504 "params": { 00:29:13.504 "name": "Nvme$subsystem", 00:29:13.504 "trtype": "$TEST_TRANSPORT", 00:29:13.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.504 "adrfam": "ipv4", 00:29:13.504 "trsvcid": "$NVMF_PORT", 00:29:13.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.504 "hdgst": ${hdgst:-false}, 00:29:13.504 "ddgst": ${ddgst:-false} 00:29:13.504 }, 00:29:13.504 "method": "bdev_nvme_attach_controller" 00:29:13.504 } 00:29:13.504 EOF 00:29:13.504 )") 00:29:13.504 15:16:27 -- target/dif.sh@72 -- # (( file++ )) 00:29:13.504 15:16:27 -- target/dif.sh@72 -- # (( file <= files )) 00:29:13.504 15:16:27 -- nvmf/common.sh@543 -- # cat 00:29:13.504 15:16:27 -- nvmf/common.sh@545 -- # jq . 
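The fio stage is driven entirely through file descriptors: gen_nvmf_target_json assembles the bdev_nvme attach configuration (printed just below) and gen_fio_conf emits a matching job file, both handed to fio's spdk_bdev ioengine. The generated job file itself is never echoed in the log, so the following is only a plausible hand-written equivalent, assuming the attached controllers expose bdevs named Nvme0n1 and Nvme1n1 and using the options visible in the fio header lines:
# bdev.json holds the JSON produced by gen_nvmf_target_json (two bdev_nvme_attach_controller entries).
cat > dif.fio <<'FIO'
[global]
thread=1
ioengine=spdk_bdev
spdk_json_conf=bdev.json
direct=1
rw=randread
bs=4096
iodepth=4
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO
# Run with the SPDK fio plugin preloaded, as in the trace.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif.fio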
00:29:13.504 15:16:27 -- nvmf/common.sh@546 -- # IFS=, 00:29:13.504 15:16:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:13.504 "params": { 00:29:13.504 "name": "Nvme0", 00:29:13.504 "trtype": "tcp", 00:29:13.504 "traddr": "10.0.0.2", 00:29:13.504 "adrfam": "ipv4", 00:29:13.504 "trsvcid": "4420", 00:29:13.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.504 "hdgst": false, 00:29:13.504 "ddgst": false 00:29:13.504 }, 00:29:13.504 "method": "bdev_nvme_attach_controller" 00:29:13.504 },{ 00:29:13.504 "params": { 00:29:13.504 "name": "Nvme1", 00:29:13.504 "trtype": "tcp", 00:29:13.504 "traddr": "10.0.0.2", 00:29:13.504 "adrfam": "ipv4", 00:29:13.504 "trsvcid": "4420", 00:29:13.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:13.504 "hdgst": false, 00:29:13.504 "ddgst": false 00:29:13.504 }, 00:29:13.504 "method": "bdev_nvme_attach_controller" 00:29:13.504 }' 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:13.504 15:16:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:13.504 15:16:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:13.504 15:16:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:13.504 15:16:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:13.504 15:16:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:13.504 15:16:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.504 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:13.504 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:13.504 fio-3.35 00:29:13.504 Starting 2 threads 00:29:23.491 00:29:23.491 filename0: (groupid=0, jobs=1): err= 0: pid=90262: Thu Apr 18 15:16:38 2024 00:29:23.491 read: IOPS=236, BW=948KiB/s (970kB/s)(9504KiB/10030msec) 00:29:23.491 slat (usec): min=5, max=114, avg=10.09, stdev= 8.03 00:29:23.491 clat (usec): min=341, max=41804, avg=16852.83, stdev=19812.68 00:29:23.491 lat (usec): min=347, max=41823, avg=16862.93, stdev=19812.14 00:29:23.491 clat percentiles (usec): 00:29:23.491 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 388], 20.00th=[ 416], 00:29:23.491 | 30.00th=[ 482], 40.00th=[ 562], 50.00th=[ 742], 60.00th=[40633], 00:29:23.491 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:23.491 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:23.491 | 99.99th=[41681] 00:29:23.491 bw ( KiB/s): min= 512, max= 2720, per=50.63%, avg=948.80, stdev=614.42, samples=20 00:29:23.491 iops : min= 128, max= 680, avg=237.20, stdev=153.60, samples=20 00:29:23.491 lat (usec) : 500=32.62%, 750=18.06%, 1000=7.37% 00:29:23.491 lat (msec) : 2=1.39%, 4=0.17%, 50=40.40% 00:29:23.491 cpu : usr=92.24%, sys=7.31%, ctx=9, majf=0, minf=0 00:29:23.491 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:23.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.491 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.491 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.491 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:23.491 filename1: (groupid=0, jobs=1): err= 0: pid=90263: Thu Apr 18 15:16:38 2024 00:29:23.491 read: IOPS=231, BW=925KiB/s (947kB/s)(9280KiB/10032msec) 00:29:23.491 slat (nsec): min=5714, max=95179, avg=9637.47, stdev=7669.72 00:29:23.491 clat (usec): min=334, max=44133, avg=17265.22, stdev=19919.01 00:29:23.491 lat (usec): min=340, max=44151, avg=17274.86, stdev=19918.35 00:29:23.491 clat percentiles (usec): 00:29:23.491 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 379], 20.00th=[ 404], 00:29:23.491 | 30.00th=[ 445], 40.00th=[ 553], 50.00th=[ 766], 60.00th=[40633], 00:29:23.491 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:23.491 | 99.00th=[41681], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:29:23.491 | 99.99th=[44303] 00:29:23.491 bw ( KiB/s): min= 544, max= 2560, per=49.46%, avg=926.40, stdev=462.97, samples=20 00:29:23.491 iops : min= 136, max= 640, avg=231.60, stdev=115.74, samples=20 00:29:23.491 lat (usec) : 500=34.91%, 750=14.48%, 1000=7.07% 00:29:23.491 lat (msec) : 2=2.16%, 50=41.38% 00:29:23.491 cpu : usr=92.62%, sys=6.95%, ctx=21, majf=0, minf=0 00:29:23.491 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:23.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.491 issued rwts: total=2320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.491 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:23.491 00:29:23.491 Run status group 0 (all jobs): 00:29:23.491 READ: bw=1872KiB/s (1917kB/s), 925KiB/s-948KiB/s (947kB/s-970kB/s), io=18.3MiB (19.2MB), run=10030-10032msec 00:29:23.491 15:16:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:23.491 15:16:38 -- target/dif.sh@43 -- # local sub 00:29:23.491 15:16:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.491 15:16:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:23.491 15:16:38 -- target/dif.sh@36 -- # local sub_id=0 00:29:23.491 15:16:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.491 15:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.491 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.491 15:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.491 15:16:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:23.491 15:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.491 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.491 15:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.491 15:16:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.491 15:16:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:23.491 15:16:38 -- target/dif.sh@36 -- # local sub_id=1 00:29:23.491 15:16:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.491 15:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.491 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.491 15:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.491 15:16:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:23.491 15:16:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.491 15:16:38 -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.491 15:16:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.491 00:29:23.491 real 0m11.266s 00:29:23.491 user 0m19.376s 00:29:23.491 sys 0m1.773s 00:29:23.491 15:16:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:23.491 15:16:38 -- common/autotest_common.sh@10 -- # set +x 00:29:23.491 ************************************ 00:29:23.491 END TEST fio_dif_1_multi_subsystems 00:29:23.491 ************************************ 00:29:23.491 15:16:39 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:23.491 15:16:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:23.491 15:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:23.491 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:29:23.491 ************************************ 00:29:23.491 START TEST fio_dif_rand_params 00:29:23.491 ************************************ 00:29:23.491 15:16:39 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:29:23.491 15:16:39 -- target/dif.sh@100 -- # local NULL_DIF 00:29:23.491 15:16:39 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:23.491 15:16:39 -- target/dif.sh@103 -- # NULL_DIF=3 00:29:23.491 15:16:39 -- target/dif.sh@103 -- # bs=128k 00:29:23.491 15:16:39 -- target/dif.sh@103 -- # numjobs=3 00:29:23.491 15:16:39 -- target/dif.sh@103 -- # iodepth=3 00:29:23.491 15:16:39 -- target/dif.sh@103 -- # runtime=5 00:29:23.491 15:16:39 -- target/dif.sh@105 -- # create_subsystems 0 00:29:23.492 15:16:39 -- target/dif.sh@28 -- # local sub 00:29:23.492 15:16:39 -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.492 15:16:39 -- target/dif.sh@31 -- # create_subsystem 0 00:29:23.492 15:16:39 -- target/dif.sh@18 -- # local sub_id=0 00:29:23.492 15:16:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:23.492 15:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.492 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:29:23.492 bdev_null0 00:29:23.492 15:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.492 15:16:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:23.492 15:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.492 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:29:23.492 15:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.492 15:16:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:23.492 15:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.492 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:29:23.492 15:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.492 15:16:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.492 15:16:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.492 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:29:23.492 [2024-04-18 15:16:39.162339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.492 15:16:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.492 15:16:39 -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:23.492 15:16:39 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:23.492 15:16:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:23.492 15:16:39 
-- nvmf/common.sh@521 -- # config=() 00:29:23.492 15:16:39 -- nvmf/common.sh@521 -- # local subsystem config 00:29:23.492 15:16:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:23.492 15:16:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:23.492 { 00:29:23.492 "params": { 00:29:23.492 "name": "Nvme$subsystem", 00:29:23.492 "trtype": "$TEST_TRANSPORT", 00:29:23.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.492 "adrfam": "ipv4", 00:29:23.492 "trsvcid": "$NVMF_PORT", 00:29:23.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.492 "hdgst": ${hdgst:-false}, 00:29:23.492 "ddgst": ${ddgst:-false} 00:29:23.492 }, 00:29:23.492 "method": "bdev_nvme_attach_controller" 00:29:23.492 } 00:29:23.492 EOF 00:29:23.492 )") 00:29:23.492 15:16:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.492 15:16:39 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.492 15:16:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:23.492 15:16:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.492 15:16:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:23.492 15:16:39 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.492 15:16:39 -- common/autotest_common.sh@1327 -- # shift 00:29:23.492 15:16:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:23.492 15:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.492 15:16:39 -- nvmf/common.sh@543 -- # cat 00:29:23.492 15:16:39 -- target/dif.sh@82 -- # gen_fio_conf 00:29:23.492 15:16:39 -- target/dif.sh@54 -- # local file 00:29:23.492 15:16:39 -- target/dif.sh@56 -- # cat 00:29:23.492 15:16:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.492 15:16:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:23.492 15:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:23.492 15:16:39 -- nvmf/common.sh@545 -- # jq . 
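At this point the harness has just assembled the per-subsystem JSON that the SPDK fio plugin consumes and is about to print it and launch fio. Stripped of the file-descriptor plumbing, the invocation pattern traced in this run boils down to the sketch below; bdev.json and dif.fio are placeholder names for the /dev/fd/62 and /dev/fd/61 process substitutions used in the actual trace.

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

Here bdev.json carries the bdev_nvme_attach_controller parameters emitted by gen_nvmf_target_json (trtype tcp, traddr 10.0.0.2, trsvcid 4420, the cnode0/host0 NQNs), and dif.fio is the job file produced by gen_fio_conf for the parameters chosen above (bs=128k, numjobs=3, iodepth=3, runtime=5).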
00:29:23.492 15:16:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:23.492 15:16:39 -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.492 15:16:39 -- nvmf/common.sh@546 -- # IFS=, 00:29:23.492 15:16:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:23.492 "params": { 00:29:23.492 "name": "Nvme0", 00:29:23.492 "trtype": "tcp", 00:29:23.492 "traddr": "10.0.0.2", 00:29:23.492 "adrfam": "ipv4", 00:29:23.492 "trsvcid": "4420", 00:29:23.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.492 "hdgst": false, 00:29:23.492 "ddgst": false 00:29:23.492 }, 00:29:23.492 "method": "bdev_nvme_attach_controller" 00:29:23.492 }' 00:29:23.752 15:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:23.752 15:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:23.752 15:16:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.752 15:16:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.752 15:16:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:23.752 15:16:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:23.752 15:16:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:23.752 15:16:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:23.752 15:16:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:23.752 15:16:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.752 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:23.752 ... 00:29:23.752 fio-3.35 00:29:23.752 Starting 3 threads 00:29:30.319 00:29:30.319 filename0: (groupid=0, jobs=1): err= 0: pid=90425: Thu Apr 18 15:16:44 2024 00:29:30.319 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5003msec) 00:29:30.319 slat (nsec): min=5735, max=54434, avg=9712.78, stdev=5198.17 00:29:30.319 clat (usec): min=3330, max=54549, avg=11689.71, stdev=3645.19 00:29:30.319 lat (usec): min=3336, max=54555, avg=11699.42, stdev=3645.29 00:29:30.319 clat percentiles (usec): 00:29:30.319 | 1.00th=[ 3359], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[10945], 00:29:30.319 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:29:30.319 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13566], 00:29:30.319 | 99.00th=[14091], 99.50th=[14353], 99.90th=[54264], 99.95th=[54789], 00:29:30.319 | 99.99th=[54789] 00:29:30.319 bw ( KiB/s): min=27648, max=38400, per=29.22%, avg=32853.33, stdev=2789.70, samples=9 00:29:30.319 iops : min= 216, max= 300, avg=256.67, stdev=21.79, samples=9 00:29:30.319 lat (msec) : 4=2.81%, 10=14.99%, 20=81.73%, 100=0.47% 00:29:30.319 cpu : usr=91.28%, sys=7.56%, ctx=10, majf=0, minf=0 00:29:30.319 IO depths : 1=32.8%, 2=67.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:30.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:30.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:30.319 filename0: (groupid=0, jobs=1): err= 0: pid=90426: Thu Apr 18 15:16:44 2024 00:29:30.319 read: IOPS=321, BW=40.2MiB/s (42.1MB/s)(201MiB/5006msec) 00:29:30.319 slat (nsec): min=5811, max=72862, avg=12402.30, stdev=4782.14 
00:29:30.319 clat (usec): min=4524, max=51179, avg=9322.24, stdev=5067.69 00:29:30.319 lat (usec): min=4532, max=51185, avg=9334.64, stdev=5067.59 00:29:30.319 clat percentiles (usec): 00:29:30.319 | 1.00th=[ 5211], 5.00th=[ 6259], 10.00th=[ 7242], 20.00th=[ 8225], 00:29:30.319 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:29:30.319 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10159], 00:29:30.319 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:29:30.319 | 99.99th=[51119] 00:29:30.319 bw ( KiB/s): min=35328, max=45056, per=36.88%, avg=41462.78, stdev=3240.82, samples=9 00:29:30.319 iops : min= 276, max= 352, avg=323.89, stdev=25.32, samples=9 00:29:30.319 lat (msec) : 10=93.47%, 20=5.04%, 50=0.75%, 100=0.75% 00:29:30.319 cpu : usr=90.33%, sys=8.37%, ctx=9, majf=0, minf=0 00:29:30.319 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:30.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:30.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:30.319 filename0: (groupid=0, jobs=1): err= 0: pid=90427: Thu Apr 18 15:16:44 2024 00:29:30.319 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(189MiB/5004msec) 00:29:30.319 slat (nsec): min=5786, max=33157, avg=11239.52, stdev=4110.51 00:29:30.319 clat (usec): min=4420, max=52309, avg=9938.01, stdev=4971.25 00:29:30.319 lat (usec): min=4425, max=52340, avg=9949.25, stdev=4971.30 00:29:30.319 clat percentiles (usec): 00:29:30.319 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 7701], 20.00th=[ 8717], 00:29:30.319 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:29:30.319 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[10945], 00:29:30.319 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:29:30.319 | 99.99th=[52167] 00:29:30.319 bw ( KiB/s): min=33792, max=44032, per=34.05%, avg=38286.22, stdev=2983.01, samples=9 00:29:30.319 iops : min= 264, max= 344, avg=299.11, stdev=23.30, samples=9 00:29:30.319 lat (msec) : 10=64.92%, 20=33.69%, 50=0.60%, 100=0.80% 00:29:30.319 cpu : usr=91.05%, sys=7.80%, ctx=11, majf=0, minf=0 00:29:30.319 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:30.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:30.319 issued rwts: total=1508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:30.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:30.319 00:29:30.319 Run status group 0 (all jobs): 00:29:30.319 READ: bw=110MiB/s (115MB/s), 32.0MiB/s-40.2MiB/s (33.6MB/s-42.1MB/s), io=550MiB (576MB), run=5003-5006msec 00:29:30.319 15:16:45 -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:30.319 15:16:45 -- target/dif.sh@43 -- # local sub 00:29:30.319 15:16:45 -- target/dif.sh@45 -- # for sub in "$@" 00:29:30.319 15:16:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:30.319 15:16:45 -- target/dif.sh@36 -- # local sub_id=0 00:29:30.319 15:16:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:30.319 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.319 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:29:30.320 15:16:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # NULL_DIF=2 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # bs=4k 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # numjobs=8 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # iodepth=16 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # runtime= 00:29:30.320 15:16:45 -- target/dif.sh@109 -- # files=2 00:29:30.320 15:16:45 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:30.320 15:16:45 -- target/dif.sh@28 -- # local sub 00:29:30.320 15:16:45 -- target/dif.sh@30 -- # for sub in "$@" 00:29:30.320 15:16:45 -- target/dif.sh@31 -- # create_subsystem 0 00:29:30.320 15:16:45 -- target/dif.sh@18 -- # local sub_id=0 00:29:30.320 15:16:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 bdev_null0 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 [2024-04-18 15:16:45.251052] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@30 -- # for sub in "$@" 00:29:30.320 15:16:45 -- target/dif.sh@31 -- # create_subsystem 1 00:29:30.320 15:16:45 -- target/dif.sh@18 -- # local sub_id=1 00:29:30.320 15:16:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 bdev_null1 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:30.320 15:16:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@30 -- # for sub in "$@" 00:29:30.320 15:16:45 -- target/dif.sh@31 -- # create_subsystem 2 00:29:30.320 15:16:45 -- target/dif.sh@18 -- # local sub_id=2 00:29:30.320 15:16:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 bdev_null2 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:30.320 15:16:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.320 15:16:45 -- common/autotest_common.sh@10 -- # set +x 00:29:30.320 15:16:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.320 15:16:45 -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:30.320 15:16:45 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:30.320 15:16:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:30.320 15:16:45 -- nvmf/common.sh@521 -- # config=() 00:29:30.320 15:16:45 -- nvmf/common.sh@521 -- # local subsystem config 00:29:30.320 15:16:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:30.320 15:16:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:30.320 { 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme$subsystem", 00:29:30.320 "trtype": "$TEST_TRANSPORT", 00:29:30.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "$NVMF_PORT", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.320 "hdgst": ${hdgst:-false}, 00:29:30.320 "ddgst": ${ddgst:-false} 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 } 00:29:30.320 EOF 00:29:30.320 )") 00:29:30.320 15:16:45 -- target/dif.sh@82 -- # gen_fio_conf 00:29:30.320 15:16:45 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.320 15:16:45 
-- target/dif.sh@54 -- # local file 00:29:30.320 15:16:45 -- target/dif.sh@56 -- # cat 00:29:30.320 15:16:45 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:30.320 15:16:45 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:30.320 15:16:45 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:30.320 15:16:45 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:30.320 15:16:45 -- common/autotest_common.sh@1327 -- # shift 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # cat 00:29:30.320 15:16:45 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:30.320 15:16:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:30.320 15:16:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:30.320 15:16:45 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:30.320 15:16:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file <= files )) 00:29:30.320 15:16:45 -- target/dif.sh@73 -- # cat 00:29:30.320 15:16:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:30.320 { 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme$subsystem", 00:29:30.320 "trtype": "$TEST_TRANSPORT", 00:29:30.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "$NVMF_PORT", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.320 "hdgst": ${hdgst:-false}, 00:29:30.320 "ddgst": ${ddgst:-false} 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 } 00:29:30.320 EOF 00:29:30.320 )") 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file++ )) 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file <= files )) 00:29:30.320 15:16:45 -- target/dif.sh@73 -- # cat 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # cat 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file++ )) 00:29:30.320 15:16:45 -- target/dif.sh@72 -- # (( file <= files )) 00:29:30.320 15:16:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:30.320 { 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme$subsystem", 00:29:30.320 "trtype": "$TEST_TRANSPORT", 00:29:30.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "$NVMF_PORT", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.320 "hdgst": ${hdgst:-false}, 00:29:30.320 "ddgst": ${ddgst:-false} 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 } 00:29:30.320 EOF 00:29:30.320 )") 00:29:30.320 15:16:45 -- nvmf/common.sh@543 -- # cat 00:29:30.320 15:16:45 -- nvmf/common.sh@545 -- # jq . 
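Before the combined three-controller config is printed below, the target-side setup performed by the preceding trace for subsystems 0, 1 and 2 can be condensed into plain rpc.py form, which is roughly what the harness's rpc_cmd wrapper resolves to; the $i suffix stands for the subsystem index in this sketch.

  # null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 2
  scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
  # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches one controller per subsystem through the bdev_nvme_attach_controller entries in the JSON that follows.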
00:29:30.320 15:16:45 -- nvmf/common.sh@546 -- # IFS=, 00:29:30.320 15:16:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme0", 00:29:30.320 "trtype": "tcp", 00:29:30.320 "traddr": "10.0.0.2", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "4420", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:30.320 "hdgst": false, 00:29:30.320 "ddgst": false 00:29:30.320 }, 00:29:30.320 "method": "bdev_nvme_attach_controller" 00:29:30.320 },{ 00:29:30.320 "params": { 00:29:30.320 "name": "Nvme1", 00:29:30.320 "trtype": "tcp", 00:29:30.320 "traddr": "10.0.0.2", 00:29:30.320 "adrfam": "ipv4", 00:29:30.320 "trsvcid": "4420", 00:29:30.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.321 "hdgst": false, 00:29:30.321 "ddgst": false 00:29:30.321 }, 00:29:30.321 "method": "bdev_nvme_attach_controller" 00:29:30.321 },{ 00:29:30.321 "params": { 00:29:30.321 "name": "Nvme2", 00:29:30.321 "trtype": "tcp", 00:29:30.321 "traddr": "10.0.0.2", 00:29:30.321 "adrfam": "ipv4", 00:29:30.321 "trsvcid": "4420", 00:29:30.321 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:30.321 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:30.321 "hdgst": false, 00:29:30.321 "ddgst": false 00:29:30.321 }, 00:29:30.321 "method": "bdev_nvme_attach_controller" 00:29:30.321 }' 00:29:30.321 15:16:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:30.321 15:16:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:30.321 15:16:45 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:30.321 15:16:45 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:30.321 15:16:45 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:30.321 15:16:45 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:30.321 15:16:45 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:30.321 15:16:45 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:30.321 15:16:45 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:30.321 15:16:45 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:30.321 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:30.321 ... 00:29:30.321 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:30.321 ... 00:29:30.321 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:30.321 ... 
00:29:30.321 fio-3.35 00:29:30.321 Starting 24 threads 00:29:42.524 00:29:42.524 filename0: (groupid=0, jobs=1): err= 0: pid=90527: Thu Apr 18 15:16:56 2024 00:29:42.524 read: IOPS=253, BW=1014KiB/s (1038kB/s)(9.94MiB/10036msec) 00:29:42.524 slat (usec): min=5, max=8034, avg=16.69, stdev=224.75 00:29:42.524 clat (msec): min=25, max=159, avg=63.00, stdev=20.31 00:29:42.524 lat (msec): min=25, max=159, avg=63.02, stdev=20.31 00:29:42.524 clat percentiles (msec): 00:29:42.524 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:29:42.524 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 65], 00:29:42.524 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 92], 95.00th=[ 103], 00:29:42.524 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 153], 99.95th=[ 153], 00:29:42.524 | 99.99th=[ 161] 00:29:42.524 bw ( KiB/s): min= 720, max= 1200, per=4.30%, avg=1010.55, stdev=150.69, samples=20 00:29:42.524 iops : min= 180, max= 300, avg=252.60, stdev=37.66, samples=20 00:29:42.524 lat (msec) : 50=30.46%, 100=64.43%, 250=5.11% 00:29:42.524 cpu : usr=33.06%, sys=2.02%, ctx=949, majf=0, minf=9 00:29:42.524 IO depths : 1=0.7%, 2=1.8%, 4=8.8%, 8=75.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:42.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.524 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.524 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.524 filename0: (groupid=0, jobs=1): err= 0: pid=90528: Thu Apr 18 15:16:56 2024 00:29:42.524 read: IOPS=238, BW=956KiB/s (979kB/s)(9584KiB/10028msec) 00:29:42.524 slat (usec): min=4, max=11046, avg=18.89, stdev=278.57 00:29:42.524 clat (msec): min=22, max=159, avg=66.78, stdev=21.74 00:29:42.524 lat (msec): min=22, max=159, avg=66.79, stdev=21.74 00:29:42.524 clat percentiles (msec): 00:29:42.524 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:29:42.524 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:29:42.524 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 100], 95.00th=[ 107], 00:29:42.524 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 161], 99.95th=[ 161], 00:29:42.524 | 99.99th=[ 161] 00:29:42.524 bw ( KiB/s): min= 688, max= 1328, per=4.05%, avg=952.10, stdev=157.08, samples=20 00:29:42.524 iops : min= 172, max= 332, avg=238.00, stdev=39.23, samples=20 00:29:42.524 lat (msec) : 50=23.96%, 100=66.44%, 250=9.60% 00:29:42.524 cpu : usr=31.48%, sys=1.47%, ctx=935, majf=0, minf=9 00:29:42.524 IO depths : 1=0.6%, 2=1.5%, 4=8.3%, 8=76.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:42.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.524 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.524 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.524 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.524 filename0: (groupid=0, jobs=1): err= 0: pid=90529: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=224, BW=897KiB/s (919kB/s)(8992KiB/10020msec) 00:29:42.525 slat (usec): min=4, max=5029, avg=14.50, stdev=135.55 00:29:42.525 clat (msec): min=29, max=142, avg=71.19, stdev=21.30 00:29:42.525 lat (msec): min=29, max=142, avg=71.20, stdev=21.30 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 54], 00:29:42.525 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 75], 00:29:42.525 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 109], 
00:29:42.525 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:29:42.525 | 99.99th=[ 144] 00:29:42.525 bw ( KiB/s): min= 688, max= 1080, per=3.80%, avg=892.85, stdev=118.87, samples=20 00:29:42.525 iops : min= 172, max= 270, avg=223.20, stdev=29.71, samples=20 00:29:42.525 lat (msec) : 50=14.41%, 100=76.60%, 250=8.99% 00:29:42.525 cpu : usr=42.08%, sys=2.29%, ctx=1377, majf=0, minf=9 00:29:42.525 IO depths : 1=1.3%, 2=2.8%, 4=10.9%, 8=72.7%, 16=12.2%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename0: (groupid=0, jobs=1): err= 0: pid=90530: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=287, BW=1150KiB/s (1178kB/s)(11.3MiB/10055msec) 00:29:42.525 slat (usec): min=5, max=3987, avg=13.00, stdev=92.80 00:29:42.525 clat (msec): min=4, max=139, avg=55.48, stdev=17.99 00:29:42.525 lat (msec): min=4, max=139, avg=55.50, stdev=17.99 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 41], 00:29:42.525 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 59], 00:29:42.525 | 70.00th=[ 62], 80.00th=[ 68], 90.00th=[ 80], 95.00th=[ 89], 00:29:42.525 | 99.00th=[ 112], 99.50th=[ 125], 99.90th=[ 140], 99.95th=[ 140], 00:29:42.525 | 99.99th=[ 140] 00:29:42.525 bw ( KiB/s): min= 864, max= 1402, per=4.89%, avg=1149.55, stdev=166.99, samples=20 00:29:42.525 iops : min= 216, max= 350, avg=287.35, stdev=41.69, samples=20 00:29:42.525 lat (msec) : 10=1.11%, 20=0.38%, 50=40.54%, 100=55.86%, 250=2.11% 00:29:42.525 cpu : usr=42.39%, sys=2.41%, ctx=1349, majf=0, minf=9 00:29:42.525 IO depths : 1=1.0%, 2=2.2%, 4=8.8%, 8=75.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename0: (groupid=0, jobs=1): err= 0: pid=90531: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10025msec) 00:29:42.525 slat (usec): min=5, max=9015, avg=17.17, stdev=238.70 00:29:42.525 clat (msec): min=21, max=129, avg=61.20, stdev=19.68 00:29:42.525 lat (msec): min=21, max=129, avg=61.21, stdev=19.69 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 29], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 43], 00:29:42.525 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:29:42.525 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 97], 00:29:42.525 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:29:42.525 | 99.99th=[ 130] 00:29:42.525 bw ( KiB/s): min= 728, max= 1328, per=4.44%, avg=1043.30, stdev=152.17, samples=20 00:29:42.525 iops : min= 182, max= 332, avg=260.75, stdev=38.02, samples=20 00:29:42.525 lat (msec) : 50=32.47%, 100=64.09%, 250=3.44% 00:29:42.525 cpu : usr=42.36%, sys=2.27%, ctx=1502, majf=0, minf=9 00:29:42.525 IO depths : 1=1.2%, 2=2.8%, 4=11.3%, 8=72.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 
issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename0: (groupid=0, jobs=1): err= 0: pid=90532: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.1MiB/10026msec) 00:29:42.525 slat (usec): min=5, max=8017, avg=13.62, stdev=157.19 00:29:42.525 clat (msec): min=26, max=131, avg=61.63, stdev=18.73 00:29:42.525 lat (msec): min=26, max=131, avg=61.65, stdev=18.72 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:29:42.525 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 64], 00:29:42.525 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 89], 95.00th=[ 96], 00:29:42.525 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:29:42.525 | 99.99th=[ 132] 00:29:42.525 bw ( KiB/s): min= 864, max= 1418, per=4.39%, avg=1032.00, stdev=138.33, samples=20 00:29:42.525 iops : min= 216, max= 354, avg=257.95, stdev=34.52, samples=20 00:29:42.525 lat (msec) : 50=34.23%, 100=61.80%, 250=3.97% 00:29:42.525 cpu : usr=31.47%, sys=1.45%, ctx=933, majf=0, minf=9 00:29:42.525 IO depths : 1=0.7%, 2=1.6%, 4=8.0%, 8=76.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename0: (groupid=0, jobs=1): err= 0: pid=90533: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=231, BW=925KiB/s (947kB/s)(9276KiB/10033msec) 00:29:42.525 slat (usec): min=2, max=7972, avg=15.43, stdev=185.03 00:29:42.525 clat (msec): min=25, max=160, avg=69.05, stdev=22.03 00:29:42.525 lat (msec): min=25, max=160, avg=69.07, stdev=22.04 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 53], 00:29:42.525 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:29:42.525 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 103], 95.00th=[ 114], 00:29:42.525 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 161], 99.95th=[ 161], 00:29:42.525 | 99.99th=[ 161] 00:29:42.525 bw ( KiB/s): min= 640, max= 1200, per=3.92%, avg=920.90, stdev=175.95, samples=20 00:29:42.525 iops : min= 160, max= 300, avg=230.20, stdev=43.99, samples=20 00:29:42.525 lat (msec) : 50=16.60%, 100=73.18%, 250=10.22% 00:29:42.525 cpu : usr=46.43%, sys=2.23%, ctx=1278, majf=0, minf=9 00:29:42.525 IO depths : 1=2.3%, 2=4.8%, 4=13.2%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename0: (groupid=0, jobs=1): err= 0: pid=90534: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=274, BW=1100KiB/s (1126kB/s)(10.8MiB/10055msec) 00:29:42.525 slat (usec): min=5, max=4756, avg=16.50, stdev=163.33 00:29:42.525 clat (msec): min=2, max=149, avg=58.08, stdev=24.56 00:29:42.525 lat (msec): min=2, max=149, avg=58.09, stdev=24.56 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:29:42.525 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 61], 00:29:42.525 | 
70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 95], 95.00th=[ 108], 00:29:42.525 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 150], 00:29:42.525 | 99.99th=[ 150] 00:29:42.525 bw ( KiB/s): min= 560, max= 2163, per=4.68%, avg=1098.40, stdev=338.13, samples=20 00:29:42.525 iops : min= 140, max= 540, avg=274.55, stdev=84.40, samples=20 00:29:42.525 lat (msec) : 4=0.83%, 10=2.06%, 20=0.58%, 50=39.58%, 100=49.31% 00:29:42.525 lat (msec) : 250=7.63% 00:29:42.525 cpu : usr=41.78%, sys=1.93%, ctx=1596, majf=0, minf=10 00:29:42.525 IO depths : 1=1.6%, 2=3.4%, 4=11.3%, 8=72.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename1: (groupid=0, jobs=1): err= 0: pid=90535: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=218, BW=873KiB/s (894kB/s)(8740KiB/10011msec) 00:29:42.525 slat (usec): min=2, max=8018, avg=18.06, stdev=242.24 00:29:42.525 clat (msec): min=23, max=167, avg=73.20, stdev=22.67 00:29:42.525 lat (msec): min=23, max=167, avg=73.22, stdev=22.68 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 57], 00:29:42.525 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 74], 00:29:42.525 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 117], 00:29:42.525 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:29:42.525 | 99.99th=[ 167] 00:29:42.525 bw ( KiB/s): min= 640, max= 1256, per=3.70%, avg=870.37, stdev=138.23, samples=19 00:29:42.525 iops : min= 160, max= 314, avg=217.58, stdev=34.57, samples=19 00:29:42.525 lat (msec) : 50=13.59%, 100=75.01%, 250=11.40% 00:29:42.525 cpu : usr=33.09%, sys=1.51%, ctx=932, majf=0, minf=9 00:29:42.525 IO depths : 1=2.2%, 2=4.9%, 4=14.3%, 8=67.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:29:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.525 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.525 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.525 filename1: (groupid=0, jobs=1): err= 0: pid=90536: Thu Apr 18 15:16:56 2024 00:29:42.525 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.1MiB/10029msec) 00:29:42.525 slat (usec): min=4, max=4024, avg=12.62, stdev=97.61 00:29:42.525 clat (msec): min=23, max=127, avg=56.58, stdev=16.57 00:29:42.525 lat (msec): min=23, max=127, avg=56.59, stdev=16.57 00:29:42.525 clat percentiles (msec): 00:29:42.525 | 1.00th=[ 29], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 43], 00:29:42.525 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 60], 00:29:42.525 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 77], 95.00th=[ 86], 00:29:42.525 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 128], 99.95th=[ 128], 00:29:42.525 | 99.99th=[ 128] 00:29:42.525 bw ( KiB/s): min= 816, max= 1408, per=4.79%, avg=1126.40, stdev=149.66, samples=20 00:29:42.526 iops : min= 204, max= 352, avg=281.60, stdev=37.41, samples=20 00:29:42.526 lat (msec) : 50=39.76%, 100=57.56%, 250=2.68% 00:29:42.526 cpu : usr=44.02%, sys=2.10%, ctx=1237, majf=0, minf=9 00:29:42.526 IO depths : 1=0.2%, 2=0.4%, 4=5.8%, 8=80.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:42.526 complete : 0=0.0%, 4=89.0%, 8=6.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90537: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=261, BW=1046KiB/s (1071kB/s)(10.3MiB/10052msec) 00:29:42.526 slat (usec): min=5, max=11054, avg=20.29, stdev=299.11 00:29:42.526 clat (msec): min=2, max=128, avg=61.05, stdev=23.70 00:29:42.526 lat (msec): min=2, max=129, avg=61.07, stdev=23.70 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 4], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 44], 00:29:42.526 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 64], 00:29:42.526 | 70.00th=[ 70], 80.00th=[ 81], 90.00th=[ 93], 95.00th=[ 105], 00:29:42.526 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 130], 00:29:42.526 | 99.99th=[ 130] 00:29:42.526 bw ( KiB/s): min= 720, max= 2176, per=4.45%, avg=1044.80, stdev=312.02, samples=20 00:29:42.526 iops : min= 180, max= 544, avg=261.20, stdev=78.01, samples=20 00:29:42.526 lat (msec) : 4=1.29%, 10=1.75%, 20=1.22%, 50=27.51%, 100=62.52% 00:29:42.526 lat (msec) : 250=5.71% 00:29:42.526 cpu : usr=34.04%, sys=1.61%, ctx=982, majf=0, minf=9 00:29:42.526 IO depths : 1=0.8%, 2=1.8%, 4=8.7%, 8=75.6%, 16=13.1%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90538: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.4MiB/10030msec) 00:29:42.526 slat (usec): min=2, max=8022, avg=14.27, stdev=170.46 00:29:42.526 clat (msec): min=26, max=140, avg=60.42, stdev=19.25 00:29:42.526 lat (msec): min=26, max=140, avg=60.44, stdev=19.25 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 32], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43], 00:29:42.526 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:29:42.526 | 70.00th=[ 69], 80.00th=[ 75], 90.00th=[ 87], 95.00th=[ 97], 00:29:42.526 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 140], 99.95th=[ 140], 00:29:42.526 | 99.99th=[ 140] 00:29:42.526 bw ( KiB/s): min= 768, max= 1376, per=4.48%, avg=1053.45, stdev=169.44, samples=20 00:29:42.526 iops : min= 192, max= 344, avg=263.35, stdev=42.36, samples=20 00:29:42.526 lat (msec) : 50=34.34%, 100=61.58%, 250=4.08% 00:29:42.526 cpu : usr=39.98%, sys=1.95%, ctx=1201, majf=0, minf=9 00:29:42.526 IO depths : 1=1.0%, 2=2.1%, 4=8.5%, 8=76.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90539: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=246, BW=985KiB/s (1008kB/s)(9872KiB/10026msec) 00:29:42.526 slat (nsec): min=5393, max=41672, avg=10549.55, stdev=4726.59 00:29:42.526 clat (msec): min=24, max=149, avg=64.88, stdev=20.72 00:29:42.526 lat (msec): min=24, max=149, avg=64.89, stdev=20.72 00:29:42.526 clat percentiles (msec): 
00:29:42.526 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:29:42.526 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 69], 00:29:42.526 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 100], 00:29:42.526 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 150], 99.95th=[ 150], 00:29:42.526 | 99.99th=[ 150] 00:29:42.526 bw ( KiB/s): min= 640, max= 1256, per=4.19%, avg=983.30, stdev=171.01, samples=20 00:29:42.526 iops : min= 160, max= 314, avg=245.80, stdev=42.71, samples=20 00:29:42.526 lat (msec) : 50=27.96%, 100=67.59%, 250=4.46% 00:29:42.526 cpu : usr=32.09%, sys=1.86%, ctx=905, majf=0, minf=9 00:29:42.526 IO depths : 1=0.7%, 2=1.7%, 4=7.8%, 8=76.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90540: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=248, BW=994KiB/s (1018kB/s)(9988KiB/10044msec) 00:29:42.526 slat (usec): min=4, max=8018, avg=20.38, stdev=278.22 00:29:42.526 clat (msec): min=6, max=122, avg=64.26, stdev=21.28 00:29:42.526 lat (msec): min=6, max=122, avg=64.28, stdev=21.28 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:29:42.526 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 69], 00:29:42.526 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 107], 00:29:42.526 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 124], 00:29:42.526 | 99.99th=[ 124] 00:29:42.526 bw ( KiB/s): min= 720, max= 1608, per=4.22%, avg=992.30, stdev=203.46, samples=20 00:29:42.526 iops : min= 180, max= 402, avg=248.05, stdev=50.88, samples=20 00:29:42.526 lat (msec) : 10=1.28%, 20=0.64%, 50=25.83%, 100=65.56%, 250=6.69% 00:29:42.526 cpu : usr=32.88%, sys=1.42%, ctx=937, majf=0, minf=9 00:29:42.526 IO depths : 1=0.6%, 2=1.5%, 4=8.0%, 8=77.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90541: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=227, BW=911KiB/s (933kB/s)(9136KiB/10024msec) 00:29:42.526 slat (usec): min=4, max=172, avg=11.48, stdev=11.81 00:29:42.526 clat (msec): min=27, max=154, avg=70.14, stdev=20.73 00:29:42.526 lat (msec): min=27, max=154, avg=70.15, stdev=20.73 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:29:42.526 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:29:42.526 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 97], 95.00th=[ 111], 00:29:42.526 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 155], 99.95th=[ 155], 00:29:42.526 | 99.99th=[ 155] 00:29:42.526 bw ( KiB/s): min= 640, max= 1152, per=3.86%, avg=907.20, stdev=149.40, samples=20 00:29:42.526 iops : min= 160, max= 288, avg=226.80, stdev=37.35, samples=20 00:29:42.526 lat (msec) : 50=15.28%, 100=77.28%, 250=7.44% 00:29:42.526 cpu : usr=38.39%, sys=1.86%, ctx=1066, majf=0, minf=9 00:29:42.526 IO depths : 1=2.3%, 2=5.0%, 4=14.1%, 
8=67.8%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename1: (groupid=0, jobs=1): err= 0: pid=90542: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=247, BW=991KiB/s (1015kB/s)(9948KiB/10036msec) 00:29:42.526 slat (usec): min=5, max=8028, avg=15.34, stdev=179.78 00:29:42.526 clat (msec): min=29, max=160, avg=64.44, stdev=23.30 00:29:42.526 lat (msec): min=29, max=160, avg=64.46, stdev=23.29 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 46], 00:29:42.526 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:29:42.526 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 100], 95.00th=[ 108], 00:29:42.526 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 161], 00:29:42.526 | 99.99th=[ 161] 00:29:42.526 bw ( KiB/s): min= 472, max= 1248, per=4.20%, avg=987.70, stdev=216.24, samples=20 00:29:42.526 iops : min= 118, max= 312, avg=246.85, stdev=53.99, samples=20 00:29:42.526 lat (msec) : 50=32.09%, 100=59.47%, 250=8.44% 00:29:42.526 cpu : usr=32.13%, sys=1.49%, ctx=961, majf=0, minf=9 00:29:42.526 IO depths : 1=1.4%, 2=3.3%, 4=11.3%, 8=72.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename2: (groupid=0, jobs=1): err= 0: pid=90543: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=213, BW=856KiB/s (876kB/s)(8568KiB/10010msec) 00:29:42.526 slat (usec): min=2, max=8024, avg=14.49, stdev=173.23 00:29:42.526 clat (msec): min=14, max=156, avg=74.69, stdev=22.32 00:29:42.526 lat (msec): min=14, max=156, avg=74.70, stdev=22.31 00:29:42.526 clat percentiles (msec): 00:29:42.526 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 58], 00:29:42.526 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 81], 00:29:42.526 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 104], 95.00th=[ 118], 00:29:42.526 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:29:42.526 | 99.99th=[ 157] 00:29:42.526 bw ( KiB/s): min= 640, max= 1064, per=3.64%, avg=854.37, stdev=110.64, samples=19 00:29:42.526 iops : min= 160, max= 266, avg=213.58, stdev=27.66, samples=19 00:29:42.526 lat (msec) : 20=0.42%, 50=11.11%, 100=77.50%, 250=10.97% 00:29:42.526 cpu : usr=31.26%, sys=1.68%, ctx=935, majf=0, minf=9 00:29:42.526 IO depths : 1=1.9%, 2=4.6%, 4=14.8%, 8=67.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:29:42.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.526 issued rwts: total=2142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.526 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.526 filename2: (groupid=0, jobs=1): err= 0: pid=90544: Thu Apr 18 15:16:56 2024 00:29:42.526 read: IOPS=262, BW=1051KiB/s (1077kB/s)(10.3MiB/10037msec) 00:29:42.526 slat (usec): min=4, max=7045, avg=14.30, stdev=157.75 00:29:42.526 clat (msec): min=13, max=128, avg=60.79, stdev=19.86 00:29:42.526 lat (msec): min=13, 
max=128, avg=60.80, stdev=19.87 00:29:42.526 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 44], 00:29:42.527 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 62], 00:29:42.527 | 70.00th=[ 67], 80.00th=[ 77], 90.00th=[ 90], 95.00th=[ 99], 00:29:42.527 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 129], 99.95th=[ 129], 00:29:42.527 | 99.99th=[ 129] 00:29:42.527 bw ( KiB/s): min= 768, max= 1424, per=4.46%, avg=1048.45, stdev=187.03, samples=20 00:29:42.527 iops : min= 192, max= 356, avg=262.10, stdev=46.73, samples=20 00:29:42.527 lat (msec) : 20=0.61%, 50=34.57%, 100=60.16%, 250=4.66% 00:29:42.527 cpu : usr=41.33%, sys=2.26%, ctx=1258, majf=0, minf=10 00:29:42.527 IO depths : 1=1.5%, 2=3.5%, 4=11.4%, 8=71.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90545: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=222, BW=891KiB/s (912kB/s)(8920KiB/10013msec) 00:29:42.527 slat (usec): min=2, max=199, avg=10.06, stdev= 6.07 00:29:42.527 clat (msec): min=30, max=150, avg=71.73, stdev=22.28 00:29:42.527 lat (msec): min=30, max=150, avg=71.74, stdev=22.28 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:29:42.527 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:29:42.527 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 102], 95.00th=[ 115], 00:29:42.527 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 150], 00:29:42.527 | 99.99th=[ 150] 00:29:42.527 bw ( KiB/s): min= 640, max= 1248, per=3.74%, avg=879.58, stdev=164.96, samples=19 00:29:42.527 iops : min= 160, max= 312, avg=219.89, stdev=41.24, samples=19 00:29:42.527 lat (msec) : 50=14.53%, 100=75.11%, 250=10.36% 00:29:42.527 cpu : usr=43.12%, sys=2.09%, ctx=1287, majf=0, minf=9 00:29:42.527 IO depths : 1=3.1%, 2=6.9%, 4=17.1%, 8=63.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90546: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=266, BW=1065KiB/s (1090kB/s)(10.4MiB/10033msec) 00:29:42.527 slat (usec): min=5, max=8005, avg=13.30, stdev=154.77 00:29:42.527 clat (msec): min=25, max=126, avg=59.88, stdev=16.34 00:29:42.527 lat (msec): min=25, max=126, avg=59.90, stdev=16.34 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 45], 00:29:42.527 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:29:42.527 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:29:42.527 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 127], 00:29:42.527 | 99.99th=[ 127] 00:29:42.527 bw ( KiB/s): min= 816, max= 1456, per=4.54%, avg=1065.95, stdev=146.57, samples=20 00:29:42.527 iops : min= 204, max= 364, avg=266.45, stdev=36.64, samples=20 00:29:42.527 lat (msec) : 50=30.51%, 100=68.59%, 250=0.90% 00:29:42.527 cpu : usr=40.92%, sys=1.96%, ctx=1333, 
majf=0, minf=9 00:29:42.527 IO depths : 1=1.7%, 2=3.6%, 4=10.7%, 8=72.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90547: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=222, BW=890KiB/s (911kB/s)(8916KiB/10020msec) 00:29:42.527 slat (usec): min=3, max=7987, avg=14.42, stdev=169.03 00:29:42.527 clat (msec): min=32, max=155, avg=71.80, stdev=20.65 00:29:42.527 lat (msec): min=32, max=155, avg=71.81, stdev=20.66 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 57], 00:29:42.527 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 73], 00:29:42.527 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 97], 95.00th=[ 107], 00:29:42.527 | 99.00th=[ 129], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:29:42.527 | 99.99th=[ 157] 00:29:42.527 bw ( KiB/s): min= 640, max= 1024, per=3.78%, avg=887.60, stdev=110.59, samples=20 00:29:42.527 iops : min= 160, max= 256, avg=221.90, stdev=27.65, samples=20 00:29:42.527 lat (msec) : 50=14.09%, 100=77.84%, 250=8.08% 00:29:42.527 cpu : usr=32.06%, sys=1.59%, ctx=964, majf=0, minf=9 00:29:42.527 IO depths : 1=2.2%, 2=4.8%, 4=13.1%, 8=68.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90548: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=216, BW=867KiB/s (887kB/s)(8672KiB/10007msec) 00:29:42.527 slat (nsec): min=2837, max=42308, avg=9983.08, stdev=4504.90 00:29:42.527 clat (msec): min=8, max=152, avg=73.78, stdev=23.61 00:29:42.527 lat (msec): min=8, max=152, avg=73.79, stdev=23.61 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 58], 00:29:42.527 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 78], 00:29:42.527 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 122], 00:29:42.527 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 153], 00:29:42.527 | 99.99th=[ 153] 00:29:42.527 bw ( KiB/s): min= 512, max= 1152, per=3.65%, avg=858.84, stdev=162.96, samples=19 00:29:42.527 iops : min= 128, max= 288, avg=214.68, stdev=40.75, samples=19 00:29:42.527 lat (msec) : 10=0.74%, 50=14.67%, 100=70.53%, 250=14.07% 00:29:42.527 cpu : usr=32.62%, sys=1.60%, ctx=942, majf=0, minf=9 00:29:42.527 IO depths : 1=2.4%, 2=5.1%, 4=14.6%, 8=67.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=91.1%, 8=3.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90549: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=218, BW=873KiB/s (894kB/s)(8744KiB/10012msec) 00:29:42.527 slat (usec): min=2, max=8026, avg=17.57, stdev=242.42 00:29:42.527 clat (msec): min=34, 
max=157, avg=73.18, stdev=21.85 00:29:42.527 lat (msec): min=34, max=157, avg=73.19, stdev=21.84 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:29:42.527 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 74], 00:29:42.527 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 111], 00:29:42.527 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:29:42.527 | 99.99th=[ 159] 00:29:42.527 bw ( KiB/s): min= 640, max= 1024, per=3.70%, avg=868.05, stdev=108.11, samples=20 00:29:42.527 iops : min= 160, max= 256, avg=217.00, stdev=27.02, samples=20 00:29:42.527 lat (msec) : 50=12.17%, 100=76.72%, 250=11.12% 00:29:42.527 cpu : usr=34.34%, sys=1.70%, ctx=1001, majf=0, minf=9 00:29:42.527 IO depths : 1=2.3%, 2=5.4%, 4=15.2%, 8=66.2%, 16=10.8%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=91.6%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 filename2: (groupid=0, jobs=1): err= 0: pid=90550: Thu Apr 18 15:16:56 2024 00:29:42.527 read: IOPS=235, BW=943KiB/s (965kB/s)(9448KiB/10024msec) 00:29:42.527 slat (usec): min=5, max=4021, avg=15.99, stdev=134.78 00:29:42.527 clat (msec): min=32, max=143, avg=67.79, stdev=19.82 00:29:42.527 lat (msec): min=32, max=143, avg=67.80, stdev=19.82 00:29:42.527 clat percentiles (msec): 00:29:42.527 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 52], 00:29:42.527 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:29:42.527 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:29:42.527 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 144], 00:29:42.527 | 99.99th=[ 144] 00:29:42.527 bw ( KiB/s): min= 640, max= 1168, per=3.99%, avg=938.40, stdev=136.26, samples=20 00:29:42.527 iops : min= 160, max= 292, avg=234.60, stdev=34.07, samples=20 00:29:42.527 lat (msec) : 50=17.61%, 100=75.02%, 250=7.37% 00:29:42.527 cpu : usr=39.91%, sys=2.06%, ctx=1313, majf=0, minf=9 00:29:42.527 IO depths : 1=1.3%, 2=3.1%, 4=10.4%, 8=72.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:42.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 complete : 0=0.0%, 4=90.6%, 8=5.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.527 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.527 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:42.527 00:29:42.527 Run status group 0 (all jobs): 00:29:42.527 READ: bw=22.9MiB/s (24.0MB/s), 856KiB/s-1150KiB/s (876kB/s-1178kB/s), io=231MiB (242MB), run=10007-10055msec 00:29:42.527 15:16:56 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:42.527 15:16:56 -- target/dif.sh@43 -- # local sub 00:29:42.527 15:16:56 -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.527 15:16:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:42.527 15:16:56 -- target/dif.sh@36 -- # local sub_id=0 00:29:42.527 15:16:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:42.527 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.527 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.527 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.527 15:16:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:42.527 15:16:56 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.527 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.527 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.527 15:16:56 -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.528 15:16:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:42.528 15:16:56 -- target/dif.sh@36 -- # local sub_id=1 00:29:42.528 15:16:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@45 -- # for sub in "$@" 00:29:42.528 15:16:56 -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:42.528 15:16:56 -- target/dif.sh@36 -- # local sub_id=2 00:29:42.528 15:16:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # NULL_DIF=1 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # numjobs=2 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # iodepth=8 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # runtime=5 00:29:42.528 15:16:56 -- target/dif.sh@115 -- # files=1 00:29:42.528 15:16:56 -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:42.528 15:16:56 -- target/dif.sh@28 -- # local sub 00:29:42.528 15:16:56 -- target/dif.sh@30 -- # for sub in "$@" 00:29:42.528 15:16:56 -- target/dif.sh@31 -- # create_subsystem 0 00:29:42.528 15:16:56 -- target/dif.sh@18 -- # local sub_id=0 00:29:42.528 15:16:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 bdev_null0 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 [2024-04-18 15:16:56.802748] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@30 -- # for sub in "$@" 00:29:42.528 15:16:56 -- target/dif.sh@31 -- # create_subsystem 1 00:29:42.528 15:16:56 -- target/dif.sh@18 -- # local sub_id=1 00:29:42.528 15:16:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 bdev_null1 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.528 15:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:42.528 15:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:42.528 15:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:42.528 15:16:56 -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:42.528 15:16:56 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:42.528 15:16:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:42.528 15:16:56 -- nvmf/common.sh@521 -- # config=() 00:29:42.528 15:16:56 -- nvmf/common.sh@521 -- # local subsystem config 00:29:42.528 15:16:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:42.528 15:16:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:42.528 { 00:29:42.528 "params": { 00:29:42.528 "name": "Nvme$subsystem", 00:29:42.528 "trtype": "$TEST_TRANSPORT", 00:29:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.528 "adrfam": "ipv4", 00:29:42.528 "trsvcid": "$NVMF_PORT", 00:29:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.528 "hdgst": ${hdgst:-false}, 00:29:42.528 "ddgst": ${ddgst:-false} 00:29:42.528 }, 00:29:42.528 "method": "bdev_nvme_attach_controller" 00:29:42.528 } 00:29:42.528 EOF 00:29:42.528 )") 00:29:42.528 15:16:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.528 15:16:56 -- target/dif.sh@82 -- # gen_fio_conf 00:29:42.528 15:16:56 -- target/dif.sh@54 -- # local file 00:29:42.528 15:16:56 -- target/dif.sh@56 -- # cat 00:29:42.528 15:16:56 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:29:42.528 15:16:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:42.528 15:16:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:42.528 15:16:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:42.528 15:16:56 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.528 15:16:56 -- common/autotest_common.sh@1327 -- # shift 00:29:42.528 15:16:56 -- nvmf/common.sh@543 -- # cat 00:29:42.528 15:16:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:42.528 15:16:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.528 15:16:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:42.528 15:16:56 -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.528 15:16:56 -- target/dif.sh@73 -- # cat 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:42.528 15:16:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:42.528 15:16:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:42.528 { 00:29:42.528 "params": { 00:29:42.528 "name": "Nvme$subsystem", 00:29:42.528 "trtype": "$TEST_TRANSPORT", 00:29:42.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.528 "adrfam": "ipv4", 00:29:42.528 "trsvcid": "$NVMF_PORT", 00:29:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.528 "hdgst": ${hdgst:-false}, 00:29:42.528 "ddgst": ${ddgst:-false} 00:29:42.528 }, 00:29:42.528 "method": "bdev_nvme_attach_controller" 00:29:42.528 } 00:29:42.528 EOF 00:29:42.528 )") 00:29:42.528 15:16:56 -- target/dif.sh@72 -- # (( file++ )) 00:29:42.528 15:16:56 -- target/dif.sh@72 -- # (( file <= files )) 00:29:42.528 15:16:56 -- nvmf/common.sh@543 -- # cat 00:29:42.528 15:16:56 -- nvmf/common.sh@545 -- # jq . 
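The config assembly traced here follows a standard bash pattern: each subsystem contributes one JSON fragment through a quoted heredoc appended to an array, and the IFS=, join plus printf seen next emit the fragments as one comma-separated list (the jq . step prettifies and validates the final document). A minimal sketch of just that pattern, with the params trimmed down and the values hypothetical:

config=()
for i in 0 1; do
config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# "${config[*]}" expands to the elements joined by the first character of IFS (a comma here)
IFS=,
printf '%s\n' "${config[*]}"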
00:29:42.528 15:16:56 -- nvmf/common.sh@546 -- # IFS=, 00:29:42.528 15:16:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:42.528 "params": { 00:29:42.528 "name": "Nvme0", 00:29:42.528 "trtype": "tcp", 00:29:42.528 "traddr": "10.0.0.2", 00:29:42.528 "adrfam": "ipv4", 00:29:42.528 "trsvcid": "4420", 00:29:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.528 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.528 "hdgst": false, 00:29:42.528 "ddgst": false 00:29:42.528 }, 00:29:42.528 "method": "bdev_nvme_attach_controller" 00:29:42.528 },{ 00:29:42.528 "params": { 00:29:42.528 "name": "Nvme1", 00:29:42.528 "trtype": "tcp", 00:29:42.528 "traddr": "10.0.0.2", 00:29:42.528 "adrfam": "ipv4", 00:29:42.528 "trsvcid": "4420", 00:29:42.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.528 "hdgst": false, 00:29:42.528 "ddgst": false 00:29:42.528 }, 00:29:42.528 "method": "bdev_nvme_attach_controller" 00:29:42.528 }' 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:42.528 15:16:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:42.528 15:16:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:42.528 15:16:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:42.528 15:16:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:42.528 15:16:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:42.528 15:16:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:42.528 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:42.528 ... 00:29:42.528 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:42.528 ... 
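The filename0/filename1 job lines above come from the fio job file passed on /dev/fd/61. Given the parameters set earlier for this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), an equivalent standalone invocation would look roughly like the sketch below; the bdev names Nvme0n1/Nvme1n1 and the bdev.json file are assumptions standing in for the namespaces exposed by the bdev_nvme_attach_controller config printed just above.

cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
# fio's bs=R,W,T form: 8k reads, 16k writes, 128k trims
bs=8k,16k,128k
iodepth=8
numjobs=2
# assumed: keep the job running for the full 5s on the small null bdevs
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_rand.fio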
00:29:42.528 fio-3.35 00:29:42.528 Starting 4 threads 00:29:47.800 00:29:47.800 filename0: (groupid=0, jobs=1): err= 0: pid=90683: Thu Apr 18 15:17:02 2024 00:29:47.800 read: IOPS=2353, BW=18.4MiB/s (19.3MB/s)(92.0MiB/5002msec) 00:29:47.800 slat (usec): min=5, max=282, avg= 8.40, stdev= 5.67 00:29:47.800 clat (usec): min=1464, max=5241, avg=3359.31, stdev=152.10 00:29:47.800 lat (usec): min=1474, max=5252, avg=3367.72, stdev=151.93 00:29:47.800 clat percentiles (usec): 00:29:47.800 | 1.00th=[ 2933], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261], 00:29:47.800 | 30.00th=[ 3326], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:29:47.800 | 70.00th=[ 3425], 80.00th=[ 3458], 90.00th=[ 3490], 95.00th=[ 3523], 00:29:47.800 | 99.00th=[ 3818], 99.50th=[ 3916], 99.90th=[ 4113], 99.95th=[ 4424], 00:29:47.800 | 99.99th=[ 4621] 00:29:47.800 bw ( KiB/s): min=18688, max=19200, per=25.06%, avg=18864.00, stdev=176.00, samples=9 00:29:47.800 iops : min= 2336, max= 2400, avg=2358.00, stdev=22.00, samples=9 00:29:47.800 lat (msec) : 2=0.09%, 4=99.71%, 10=0.20% 00:29:47.800 cpu : usr=91.36%, sys=7.28%, ctx=88, majf=0, minf=0 00:29:47.800 IO depths : 1=9.0%, 2=23.8%, 4=51.2%, 8=16.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.800 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.800 issued rwts: total=11771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.800 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:47.800 filename0: (groupid=0, jobs=1): err= 0: pid=90684: Thu Apr 18 15:17:02 2024 00:29:47.800 read: IOPS=2353, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5001msec) 00:29:47.800 slat (nsec): min=5732, max=96638, avg=12362.21, stdev=4510.02 00:29:47.800 clat (usec): min=870, max=4992, avg=3342.59, stdev=179.93 00:29:47.801 lat (usec): min=876, max=5013, avg=3354.95, stdev=180.02 00:29:47.801 clat percentiles (usec): 00:29:47.801 | 1.00th=[ 2638], 5.00th=[ 3163], 10.00th=[ 3195], 20.00th=[ 3261], 00:29:47.801 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3359], 00:29:47.801 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3458], 95.00th=[ 3523], 00:29:47.801 | 99.00th=[ 4047], 99.50th=[ 4146], 99.90th=[ 4293], 99.95th=[ 4293], 00:29:47.801 | 99.99th=[ 4490] 00:29:47.801 bw ( KiB/s): min=18560, max=19312, per=25.02%, avg=18830.22, stdev=230.40, samples=9 00:29:47.801 iops : min= 2320, max= 2414, avg=2353.78, stdev=28.80, samples=9 00:29:47.801 lat (usec) : 1000=0.07% 00:29:47.801 lat (msec) : 2=0.08%, 4=98.73%, 10=1.13% 00:29:47.801 cpu : usr=92.72%, sys=6.28%, ctx=81, majf=0, minf=0 00:29:47.801 IO depths : 1=10.3%, 2=25.0%, 4=50.0%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 issued rwts: total=11768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.801 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:47.801 filename1: (groupid=0, jobs=1): err= 0: pid=90685: Thu Apr 18 15:17:02 2024 00:29:47.801 read: IOPS=2351, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5001msec) 00:29:47.801 slat (nsec): min=5718, max=51825, avg=10269.67, stdev=4440.79 00:29:47.801 clat (usec): min=1695, max=4485, avg=3361.93, stdev=205.70 00:29:47.801 lat (usec): min=1701, max=4498, avg=3372.20, stdev=205.67 00:29:47.801 clat percentiles (usec): 00:29:47.801 | 1.00th=[ 2737], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3261], 
00:29:47.801 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:29:47.801 | 70.00th=[ 3425], 80.00th=[ 3458], 90.00th=[ 3490], 95.00th=[ 3654], 00:29:47.801 | 99.00th=[ 4047], 99.50th=[ 4228], 99.90th=[ 4424], 99.95th=[ 4424], 00:29:47.801 | 99.99th=[ 4490] 00:29:47.801 bw ( KiB/s): min=18560, max=19200, per=25.04%, avg=18848.56, stdev=197.92, samples=9 00:29:47.801 iops : min= 2320, max= 2400, avg=2356.00, stdev=24.84, samples=9 00:29:47.801 lat (msec) : 2=0.03%, 4=98.73%, 10=1.24% 00:29:47.801 cpu : usr=92.72%, sys=6.30%, ctx=8, majf=0, minf=0 00:29:47.801 IO depths : 1=4.8%, 2=11.6%, 4=63.4%, 8=20.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 issued rwts: total=11760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.801 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:47.801 filename1: (groupid=0, jobs=1): err= 0: pid=90686: Thu Apr 18 15:17:02 2024 00:29:47.801 read: IOPS=2351, BW=18.4MiB/s (19.3MB/s)(91.9MiB/5001msec) 00:29:47.801 slat (nsec): min=5713, max=50091, avg=11193.10, stdev=4709.31 00:29:47.801 clat (usec): min=986, max=6612, avg=3356.13, stdev=264.18 00:29:47.801 lat (usec): min=992, max=6626, avg=3367.32, stdev=264.16 00:29:47.801 clat percentiles (usec): 00:29:47.801 | 1.00th=[ 2507], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3261], 00:29:47.801 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:29:47.801 | 70.00th=[ 3425], 80.00th=[ 3425], 90.00th=[ 3490], 95.00th=[ 3523], 00:29:47.801 | 99.00th=[ 4293], 99.50th=[ 5014], 99.90th=[ 5604], 99.95th=[ 5735], 00:29:47.801 | 99.99th=[ 5932] 00:29:47.801 bw ( KiB/s): min=18560, max=19328, per=25.02%, avg=18830.22, stdev=229.29, samples=9 00:29:47.801 iops : min= 2320, max= 2416, avg=2353.78, stdev=28.66, samples=9 00:29:47.801 lat (usec) : 1000=0.03% 00:29:47.801 lat (msec) : 2=0.29%, 4=98.14%, 10=1.55% 00:29:47.801 cpu : usr=92.92%, sys=6.14%, ctx=7, majf=0, minf=0 00:29:47.801 IO depths : 1=5.0%, 2=13.9%, 4=61.1%, 8=20.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.801 issued rwts: total=11760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.801 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:47.801 00:29:47.801 Run status group 0 (all jobs): 00:29:47.801 READ: bw=73.5MiB/s (77.1MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=368MiB (386MB), run=5001-5002msec 00:29:47.801 15:17:03 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:47.801 15:17:03 -- target/dif.sh@43 -- # local sub 00:29:47.801 15:17:03 -- target/dif.sh@45 -- # for sub in "$@" 00:29:47.801 15:17:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:47.801 15:17:03 -- target/dif.sh@36 -- # local sub_id=0 00:29:47.801 15:17:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@45 -- # for sub in "$@" 00:29:47.801 15:17:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:47.801 15:17:03 -- target/dif.sh@36 -- # local sub_id=1 00:29:47.801 15:17:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 ************************************ 00:29:47.801 END TEST fio_dif_rand_params 00:29:47.801 ************************************ 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 00:29:47.801 real 0m23.930s 00:29:47.801 user 2m4.316s 00:29:47.801 sys 0m8.079s 00:29:47.801 15:17:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:47.801 15:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:47.801 15:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 ************************************ 00:29:47.801 START TEST fio_dif_digest 00:29:47.801 ************************************ 00:29:47.801 15:17:03 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:29:47.801 15:17:03 -- target/dif.sh@123 -- # local NULL_DIF 00:29:47.801 15:17:03 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:47.801 15:17:03 -- target/dif.sh@125 -- # local hdgst ddgst 00:29:47.801 15:17:03 -- target/dif.sh@127 -- # NULL_DIF=3 00:29:47.801 15:17:03 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:47.801 15:17:03 -- target/dif.sh@127 -- # numjobs=3 00:29:47.801 15:17:03 -- target/dif.sh@127 -- # iodepth=3 00:29:47.801 15:17:03 -- target/dif.sh@127 -- # runtime=10 00:29:47.801 15:17:03 -- target/dif.sh@128 -- # hdgst=true 00:29:47.801 15:17:03 -- target/dif.sh@128 -- # ddgst=true 00:29:47.801 15:17:03 -- target/dif.sh@130 -- # create_subsystems 0 00:29:47.801 15:17:03 -- target/dif.sh@28 -- # local sub 00:29:47.801 15:17:03 -- target/dif.sh@30 -- # for sub in "$@" 00:29:47.801 15:17:03 -- target/dif.sh@31 -- # create_subsystem 0 00:29:47.801 15:17:03 -- target/dif.sh@18 -- # local sub_id=0 00:29:47.801 15:17:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 bdev_null0 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:47.801 
15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.801 15:17:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:47.801 15:17:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.801 [2024-04-18 15:17:03.272236] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.801 15:17:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:47.801 15:17:03 -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:47.801 15:17:03 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:47.801 15:17:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:47.801 15:17:03 -- nvmf/common.sh@521 -- # config=() 00:29:47.801 15:17:03 -- nvmf/common.sh@521 -- # local subsystem config 00:29:47.801 15:17:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:47.801 15:17:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.801 15:17:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:47.801 { 00:29:47.801 "params": { 00:29:47.801 "name": "Nvme$subsystem", 00:29:47.801 "trtype": "$TEST_TRANSPORT", 00:29:47.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:47.801 "adrfam": "ipv4", 00:29:47.801 "trsvcid": "$NVMF_PORT", 00:29:47.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:47.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:47.801 "hdgst": ${hdgst:-false}, 00:29:47.801 "ddgst": ${ddgst:-false} 00:29:47.801 }, 00:29:47.801 "method": "bdev_nvme_attach_controller" 00:29:47.801 } 00:29:47.801 EOF 00:29:47.801 )") 00:29:47.801 15:17:03 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.801 15:17:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:47.801 15:17:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:47.801 15:17:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:47.801 15:17:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.802 15:17:03 -- nvmf/common.sh@543 -- # cat 00:29:47.802 15:17:03 -- common/autotest_common.sh@1327 -- # shift 00:29:47.802 15:17:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:47.802 15:17:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.802 15:17:03 -- target/dif.sh@82 -- # gen_fio_conf 00:29:47.802 15:17:03 -- target/dif.sh@54 -- # local file 00:29:47.802 15:17:03 -- target/dif.sh@56 -- # cat 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:47.802 15:17:03 -- nvmf/common.sh@545 -- # jq . 
00:29:47.802 15:17:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:47.802 15:17:03 -- nvmf/common.sh@546 -- # IFS=, 00:29:47.802 15:17:03 -- target/dif.sh@72 -- # (( file <= files )) 00:29:47.802 15:17:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:47.802 "params": { 00:29:47.802 "name": "Nvme0", 00:29:47.802 "trtype": "tcp", 00:29:47.802 "traddr": "10.0.0.2", 00:29:47.802 "adrfam": "ipv4", 00:29:47.802 "trsvcid": "4420", 00:29:47.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.802 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:47.802 "hdgst": true, 00:29:47.802 "ddgst": true 00:29:47.802 }, 00:29:47.802 "method": "bdev_nvme_attach_controller" 00:29:47.802 }' 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:47.802 15:17:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:47.802 15:17:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:47.802 15:17:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:29:47.802 15:17:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:29:47.802 15:17:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:47.802 15:17:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.060 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:48.060 ... 00:29:48.060 fio-3.35 00:29:48.060 Starting 3 threads 00:30:00.270 00:30:00.270 filename0: (groupid=0, jobs=1): err= 0: pid=90801: Thu Apr 18 15:17:14 2024 00:30:00.270 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10048msec) 00:30:00.270 slat (nsec): min=5932, max=36657, avg=10959.56, stdev=4341.76 00:30:00.270 clat (usec): min=8609, max=50609, avg=14577.87, stdev=1566.88 00:30:00.270 lat (usec): min=8616, max=50624, avg=14588.83, stdev=1567.06 00:30:00.270 clat percentiles (usec): 00:30:00.270 | 1.00th=[ 8979], 5.00th=[13173], 10.00th=[13698], 20.00th=[14091], 00:30:00.270 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:30:00.270 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[15926], 00:30:00.270 | 99.00th=[16319], 99.50th=[16450], 99.90th=[17171], 99.95th=[46924], 00:30:00.270 | 99.99th=[50594] 00:30:00.270 bw ( KiB/s): min=25344, max=27648, per=27.57%, avg=26357.70, stdev=673.34, samples=20 00:30:00.270 iops : min= 198, max= 216, avg=205.90, stdev= 5.29, samples=20 00:30:00.270 lat (msec) : 10=2.62%, 20=97.28%, 50=0.05%, 100=0.05% 00:30:00.270 cpu : usr=91.75%, sys=7.28%, ctx=67, majf=0, minf=9 00:30:00.270 IO depths : 1=17.6%, 2=82.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.270 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:00.270 filename0: (groupid=0, jobs=1): err= 0: pid=90802: Thu Apr 18 15:17:14 2024 00:30:00.270 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(354MiB/10006msec) 00:30:00.270 slat (usec): min=6, max=132, avg=13.15, stdev= 4.97 00:30:00.270 clat 
(usec): min=6532, max=52102, avg=10582.66, stdev=3030.10 00:30:00.270 lat (usec): min=6547, max=52116, avg=10595.81, stdev=3030.17 00:30:00.270 clat percentiles (usec): 00:30:00.270 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:30:00.270 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:30:00.270 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:30:00.270 | 99.00th=[12256], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:30:00.270 | 99.99th=[52167] 00:30:00.270 bw ( KiB/s): min=33280, max=38400, per=37.90%, avg=36227.60, stdev=1489.01, samples=20 00:30:00.270 iops : min= 260, max= 300, avg=283.00, stdev=11.63, samples=20 00:30:00.270 lat (msec) : 10=26.59%, 20=72.88%, 100=0.53% 00:30:00.270 cpu : usr=90.07%, sys=8.47%, ctx=43, majf=0, minf=0 00:30:00.270 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 issued rwts: total=2832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.270 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:00.270 filename0: (groupid=0, jobs=1): err= 0: pid=90803: Thu Apr 18 15:17:14 2024 00:30:00.270 read: IOPS=260, BW=32.6MiB/s (34.2MB/s)(326MiB/10007msec) 00:30:00.270 slat (nsec): min=5996, max=62364, avg=13153.82, stdev=5310.83 00:30:00.270 clat (usec): min=6003, max=14596, avg=11484.37, stdev=1156.61 00:30:00.270 lat (usec): min=6024, max=14618, avg=11497.53, stdev=1156.52 00:30:00.270 clat percentiles (usec): 00:30:00.270 | 1.00th=[ 6980], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10814], 00:30:00.270 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:30:00.270 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:30:00.270 | 99.00th=[13829], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:30:00.270 | 99.99th=[14615] 00:30:00.270 bw ( KiB/s): min=32000, max=35072, per=34.91%, avg=33369.60, stdev=1001.09, samples=20 00:30:00.270 iops : min= 250, max= 274, avg=260.70, stdev= 7.82, samples=20 00:30:00.270 lat (msec) : 10=6.82%, 20=93.18% 00:30:00.270 cpu : usr=91.42%, sys=7.36%, ctx=14, majf=0, minf=0 00:30:00.270 IO depths : 1=2.5%, 2=97.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.270 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.270 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:00.270 00:30:00.270 Run status group 0 (all jobs): 00:30:00.270 READ: bw=93.4MiB/s (97.9MB/s), 25.7MiB/s-35.4MiB/s (26.9MB/s-37.1MB/s), io=938MiB (984MB), run=10006-10048msec 00:30:00.270 15:17:14 -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:00.270 15:17:14 -- target/dif.sh@43 -- # local sub 00:30:00.270 15:17:14 -- target/dif.sh@45 -- # for sub in "$@" 00:30:00.270 15:17:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:00.270 15:17:14 -- target/dif.sh@36 -- # local sub_id=0 00:30:00.270 15:17:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:00.270 15:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.270 15:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:00.270 15:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.270 15:17:14 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:00.270 15:17:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:00.270 15:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:00.270 ************************************ 00:30:00.270 END TEST fio_dif_digest 00:30:00.270 ************************************ 00:30:00.270 15:17:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:00.270 00:30:00.270 real 0m11.082s 00:30:00.270 user 0m28.094s 00:30:00.270 sys 0m2.641s 00:30:00.270 15:17:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:00.270 15:17:14 -- common/autotest_common.sh@10 -- # set +x 00:30:00.270 15:17:14 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:00.270 15:17:14 -- target/dif.sh@147 -- # nvmftestfini 00:30:00.270 15:17:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:00.270 15:17:14 -- nvmf/common.sh@117 -- # sync 00:30:00.270 15:17:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.270 15:17:14 -- nvmf/common.sh@120 -- # set +e 00:30:00.270 15:17:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.270 15:17:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.270 rmmod nvme_tcp 00:30:00.270 rmmod nvme_fabrics 00:30:00.270 rmmod nvme_keyring 00:30:00.270 15:17:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.270 15:17:14 -- nvmf/common.sh@124 -- # set -e 00:30:00.270 15:17:14 -- nvmf/common.sh@125 -- # return 0 00:30:00.270 15:17:14 -- nvmf/common.sh@478 -- # '[' -n 90001 ']' 00:30:00.270 15:17:14 -- nvmf/common.sh@479 -- # killprocess 90001 00:30:00.270 15:17:14 -- common/autotest_common.sh@936 -- # '[' -z 90001 ']' 00:30:00.270 15:17:14 -- common/autotest_common.sh@940 -- # kill -0 90001 00:30:00.270 15:17:14 -- common/autotest_common.sh@941 -- # uname 00:30:00.270 15:17:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:00.270 15:17:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90001 00:30:00.270 killing process with pid 90001 00:30:00.270 15:17:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:00.270 15:17:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:00.270 15:17:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90001' 00:30:00.270 15:17:14 -- common/autotest_common.sh@955 -- # kill 90001 00:30:00.270 15:17:14 -- common/autotest_common.sh@960 -- # wait 90001 00:30:00.270 15:17:14 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:00.270 15:17:14 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:00.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:00.270 Waiting for block devices as requested 00:30:00.270 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:00.270 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:00.270 15:17:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:00.270 15:17:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:00.270 15:17:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.270 15:17:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.271 15:17:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.271 15:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:00.271 15:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.271 15:17:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:00.271 00:30:00.271 real 
1m0.758s 00:30:00.271 user 3m48.229s 00:30:00.271 sys 0m20.932s 00:30:00.271 15:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:00.271 ************************************ 00:30:00.271 END TEST nvmf_dif 00:30:00.271 ************************************ 00:30:00.271 15:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:00.271 15:17:15 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:00.271 15:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:00.271 15:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:00.271 15:17:15 -- common/autotest_common.sh@10 -- # set +x 00:30:00.271 ************************************ 00:30:00.271 START TEST nvmf_abort_qd_sizes 00:30:00.271 ************************************ 00:30:00.271 15:17:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:00.271 * Looking for test storage... 00:30:00.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:00.271 15:17:15 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:00.271 15:17:15 -- nvmf/common.sh@7 -- # uname -s 00:30:00.271 15:17:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.271 15:17:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.271 15:17:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.271 15:17:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.271 15:17:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.271 15:17:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.271 15:17:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.271 15:17:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.271 15:17:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.271 15:17:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:30:00.271 15:17:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:30:00.271 15:17:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.271 15:17:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.271 15:17:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:00.271 15:17:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.271 15:17:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:00.271 15:17:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.271 15:17:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.271 15:17:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.271 15:17:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.271 15:17:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.271 15:17:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.271 15:17:15 -- paths/export.sh@5 -- # export PATH 00:30:00.271 15:17:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.271 15:17:15 -- nvmf/common.sh@47 -- # : 0 00:30:00.271 15:17:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.271 15:17:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.271 15:17:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.271 15:17:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.271 15:17:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.271 15:17:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.271 15:17:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.271 15:17:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.271 15:17:15 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:00.271 15:17:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:00.271 15:17:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.271 15:17:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:00.271 15:17:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:00.271 15:17:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:00.271 15:17:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.271 15:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:00.271 15:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.271 15:17:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:00.271 15:17:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:00.271 15:17:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.271 15:17:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.271 15:17:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:00.271 15:17:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:00.271 15:17:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:00.271 15:17:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:30:00.271 15:17:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:00.271 15:17:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.271 15:17:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:00.271 15:17:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:00.271 15:17:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:00.271 15:17:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:00.271 15:17:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:00.271 15:17:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:00.271 Cannot find device "nvmf_tgt_br" 00:30:00.271 15:17:15 -- nvmf/common.sh@155 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:00.271 Cannot find device "nvmf_tgt_br2" 00:30:00.271 15:17:15 -- nvmf/common.sh@156 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:00.271 15:17:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:00.271 Cannot find device "nvmf_tgt_br" 00:30:00.271 15:17:15 -- nvmf/common.sh@158 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:00.271 Cannot find device "nvmf_tgt_br2" 00:30:00.271 15:17:15 -- nvmf/common.sh@159 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:00.271 15:17:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:00.271 15:17:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:00.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:00.271 15:17:15 -- nvmf/common.sh@162 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:00.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:00.271 15:17:15 -- nvmf/common.sh@163 -- # true 00:30:00.271 15:17:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:00.271 15:17:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:00.271 15:17:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:00.271 15:17:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:00.271 15:17:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:00.271 15:17:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:00.271 15:17:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:00.271 15:17:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:00.271 15:17:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:00.271 15:17:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:00.271 15:17:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:00.271 15:17:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:00.271 15:17:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:00.271 15:17:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:00.271 15:17:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:00.271 15:17:15 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:00.529 15:17:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:00.529 15:17:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:00.529 15:17:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:00.529 15:17:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:00.529 15:17:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:00.529 15:17:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:00.529 15:17:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:00.529 15:17:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:00.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:30:00.529 00:30:00.529 --- 10.0.0.2 ping statistics --- 00:30:00.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.529 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:00.529 15:17:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:00.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:00.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:30:00.529 00:30:00.529 --- 10.0.0.3 ping statistics --- 00:30:00.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.529 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:30:00.529 15:17:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:00.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:30:00.529 00:30:00.529 --- 10.0.0.1 ping statistics --- 00:30:00.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.529 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:30:00.529 15:17:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.529 15:17:16 -- nvmf/common.sh@422 -- # return 0 00:30:00.529 15:17:16 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:00.529 15:17:16 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:01.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:01.354 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:01.354 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:01.354 15:17:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.354 15:17:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:01.354 15:17:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:01.354 15:17:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.354 15:17:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:01.354 15:17:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:01.354 15:17:17 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:01.354 15:17:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:01.354 15:17:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:01.354 15:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:01.354 15:17:17 -- nvmf/common.sh@470 -- # nvmfpid=91403 00:30:01.354 15:17:17 -- nvmf/common.sh@471 -- # waitforlisten 91403 00:30:01.354 15:17:17 -- common/autotest_common.sh@817 -- # '[' -z 91403 ']' 00:30:01.354 15:17:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.354 15:17:17 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:30:01.354 15:17:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:01.354 15:17:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.354 15:17:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:01.354 15:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:01.612 [2024-04-18 15:17:17.100118] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:30:01.612 [2024-04-18 15:17:17.100209] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.612 [2024-04-18 15:17:17.245400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:01.870 [2024-04-18 15:17:17.345700] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.870 [2024-04-18 15:17:17.345767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.870 [2024-04-18 15:17:17.345778] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.870 [2024-04-18 15:17:17.345787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.870 [2024-04-18 15:17:17.345794] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.870 [2024-04-18 15:17:17.345879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.870 [2024-04-18 15:17:17.346276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.870 [2024-04-18 15:17:17.346935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.870 [2024-04-18 15:17:17.346938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.439 15:17:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:02.439 15:17:17 -- common/autotest_common.sh@850 -- # return 0 00:30:02.439 15:17:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:02.439 15:17:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:02.439 15:17:17 -- common/autotest_common.sh@10 -- # set +x 00:30:02.439 15:17:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:02.439 15:17:18 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:02.439 15:17:18 -- scripts/common.sh@310 -- # local nvmes 00:30:02.439 15:17:18 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:02.439 15:17:18 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:02.439 15:17:18 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:02.439 15:17:18 -- scripts/common.sh@295 -- # local bdf= 00:30:02.439 15:17:18 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:02.439 15:17:18 -- 
scripts/common.sh@230 -- # local class 00:30:02.439 15:17:18 -- scripts/common.sh@231 -- # local subclass 00:30:02.439 15:17:18 -- scripts/common.sh@232 -- # local progif 00:30:02.439 15:17:18 -- scripts/common.sh@233 -- # printf %02x 1 00:30:02.439 15:17:18 -- scripts/common.sh@233 -- # class=01 00:30:02.439 15:17:18 -- scripts/common.sh@234 -- # printf %02x 8 00:30:02.439 15:17:18 -- scripts/common.sh@234 -- # subclass=08 00:30:02.439 15:17:18 -- scripts/common.sh@235 -- # printf %02x 2 00:30:02.439 15:17:18 -- scripts/common.sh@235 -- # progif=02 00:30:02.439 15:17:18 -- scripts/common.sh@237 -- # hash lspci 00:30:02.439 15:17:18 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:02.439 15:17:18 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:02.439 15:17:18 -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:02.439 15:17:18 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:02.439 15:17:18 -- scripts/common.sh@242 -- # tr -d '"' 00:30:02.439 15:17:18 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:02.439 15:17:18 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:02.439 15:17:18 -- scripts/common.sh@15 -- # local i 00:30:02.439 15:17:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:02.439 15:17:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:02.439 15:17:18 -- scripts/common.sh@24 -- # return 0 00:30:02.439 15:17:18 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:02.439 15:17:18 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:02.439 15:17:18 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:02.439 15:17:18 -- scripts/common.sh@15 -- # local i 00:30:02.439 15:17:18 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:02.439 15:17:18 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:02.439 15:17:18 -- scripts/common.sh@24 -- # return 0 00:30:02.439 15:17:18 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:02.439 15:17:18 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:02.439 15:17:18 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:02.439 15:17:18 -- scripts/common.sh@320 -- # uname -s 00:30:02.439 15:17:18 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:02.439 15:17:18 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:02.439 15:17:18 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:02.439 15:17:18 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:02.439 15:17:18 -- scripts/common.sh@320 -- # uname -s 00:30:02.439 15:17:18 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:02.439 15:17:18 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:02.439 15:17:18 -- scripts/common.sh@325 -- # (( 2 )) 00:30:02.439 15:17:18 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:02.439 15:17:18 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:02.439 15:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:02.439 15:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:02.439 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 ************************************ 00:30:02.699 START TEST spdk_target_abort 00:30:02.699 ************************************ 00:30:02.699 15:17:18 -- 
common/autotest_common.sh@1111 -- # spdk_target 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:02.699 15:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.699 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 spdk_targetn1 00:30:02.699 15:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.699 15:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.699 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 [2024-04-18 15:17:18.285902] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.699 15:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:02.699 15:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.699 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 15:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:02.699 15:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.699 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 15:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:02.699 15:17:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:02.699 15:17:18 -- common/autotest_common.sh@10 -- # set +x 00:30:02.699 [2024-04-18 15:17:18.326068] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.699 15:17:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:02.699 15:17:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:05.986 Initializing NVMe Controllers 00:30:05.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:05.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:05.986 Initialization complete. Launching workers. 00:30:05.986 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13038, failed: 0 00:30:05.986 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1165, failed to submit 11873 00:30:05.986 success 765, unsuccess 400, failed 0 00:30:05.986 15:17:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:05.986 15:17:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:09.288 Initializing NVMe Controllers 00:30:09.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:09.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:09.288 Initialization complete. Launching workers. 00:30:09.288 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5900, failed: 0 00:30:09.288 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 4655 00:30:09.288 success 277, unsuccess 968, failed 0 00:30:09.288 15:17:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:09.288 15:17:24 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:12.575 Initializing NVMe Controllers 00:30:12.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:12.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:12.575 Initialization complete. Launching workers. 
00:30:12.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33365, failed: 0 00:30:12.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2719, failed to submit 30646 00:30:12.575 success 546, unsuccess 2173, failed 0 00:30:12.575 15:17:28 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:12.575 15:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.575 15:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:12.575 15:17:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:12.575 15:17:28 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:12.575 15:17:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:12.575 15:17:28 -- common/autotest_common.sh@10 -- # set +x 00:30:13.511 15:17:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:13.512 15:17:29 -- target/abort_qd_sizes.sh@61 -- # killprocess 91403 00:30:13.512 15:17:29 -- common/autotest_common.sh@936 -- # '[' -z 91403 ']' 00:30:13.512 15:17:29 -- common/autotest_common.sh@940 -- # kill -0 91403 00:30:13.512 15:17:29 -- common/autotest_common.sh@941 -- # uname 00:30:13.771 15:17:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:13.771 15:17:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91403 00:30:13.771 killing process with pid 91403 00:30:13.771 15:17:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:13.771 15:17:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:13.771 15:17:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91403' 00:30:13.771 15:17:29 -- common/autotest_common.sh@955 -- # kill 91403 00:30:13.771 15:17:29 -- common/autotest_common.sh@960 -- # wait 91403 00:30:13.771 00:30:13.771 real 0m11.276s 00:30:13.771 user 0m45.444s 00:30:13.771 sys 0m2.311s 00:30:13.771 15:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.771 ************************************ 00:30:13.771 END TEST spdk_target_abort 00:30:13.771 ************************************ 00:30:13.771 15:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:14.030 15:17:29 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:14.030 15:17:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:14.030 15:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:14.030 15:17:29 -- common/autotest_common.sh@10 -- # set +x 00:30:14.030 ************************************ 00:30:14.030 START TEST kernel_target_abort 00:30:14.030 ************************************ 00:30:14.030 15:17:29 -- common/autotest_common.sh@1111 -- # kernel_target 00:30:14.030 15:17:29 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:14.030 15:17:29 -- nvmf/common.sh@717 -- # local ip 00:30:14.030 15:17:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:14.030 15:17:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:14.030 15:17:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.030 15:17:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.030 15:17:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:14.030 15:17:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.030 15:17:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:14.030 15:17:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:14.030 15:17:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:30:14.030 15:17:29 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:14.030 15:17:29 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:14.030 15:17:29 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:14.030 15:17:29 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:14.030 15:17:29 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:14.030 15:17:29 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:14.030 15:17:29 -- nvmf/common.sh@628 -- # local block nvme 00:30:14.030 15:17:29 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:30:14.030 15:17:29 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:14.030 15:17:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:14.030 15:17:29 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:14.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:14.598 Waiting for block devices as requested 00:30:14.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:14.856 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:14.856 15:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:14.856 15:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:14.856 15:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:14.856 15:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:14.856 15:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:14.856 15:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:14.856 15:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:14.856 15:17:30 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:14.856 15:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:14.856 No valid GPT data, bailing 00:30:14.856 15:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:14.857 15:17:30 -- scripts/common.sh@391 -- # pt= 00:30:14.857 15:17:30 -- scripts/common.sh@392 -- # return 1 00:30:14.857 15:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:14.857 15:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:14.857 15:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:14.857 15:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:30:14.857 15:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:30:14.857 15:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:14.857 15:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:14.857 15:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:30:14.857 15:17:30 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:14.857 15:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:15.116 No valid GPT data, bailing 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # pt= 00:30:15.116 15:17:30 -- scripts/common.sh@392 -- # return 1 00:30:15.116 15:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:30:15.116 15:17:30 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:30:15.116 15:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:15.116 15:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:30:15.116 15:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:30:15.116 15:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:15.116 15:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:15.116 15:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:30:15.116 15:17:30 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:15.116 15:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:15.116 No valid GPT data, bailing 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # pt= 00:30:15.116 15:17:30 -- scripts/common.sh@392 -- # return 1 00:30:15.116 15:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:30:15.116 15:17:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:15.116 15:17:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:15.116 15:17:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:30:15.116 15:17:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:15.116 15:17:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:15.116 15:17:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:15.116 15:17:30 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:30:15.116 15:17:30 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:15.116 15:17:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:15.116 No valid GPT data, bailing 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:15.116 15:17:30 -- scripts/common.sh@391 -- # pt= 00:30:15.116 15:17:30 -- scripts/common.sh@392 -- # return 1 00:30:15.116 15:17:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:30:15.116 15:17:30 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:30:15.116 15:17:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:15.116 15:17:30 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:15.116 15:17:30 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:15.116 15:17:30 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:15.116 15:17:30 -- nvmf/common.sh@656 -- # echo 1 00:30:15.116 15:17:30 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:30:15.116 15:17:30 -- nvmf/common.sh@658 -- # echo 1 00:30:15.116 15:17:30 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:15.116 15:17:30 -- nvmf/common.sh@661 -- # echo tcp 00:30:15.116 15:17:30 -- nvmf/common.sh@662 -- # echo 4420 00:30:15.116 15:17:30 -- nvmf/common.sh@663 -- # echo ipv4 00:30:15.116 15:17:30 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:15.116 15:17:30 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd --hostid=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd -a 10.0.0.1 -t tcp -s 4420 00:30:15.376 00:30:15.376 Discovery Log Number of Records 2, Generation counter 2 00:30:15.376 =====Discovery Log Entry 0====== 00:30:15.376 trtype: tcp 00:30:15.376 adrfam: ipv4 00:30:15.376 
subtype: current discovery subsystem 00:30:15.376 treq: not specified, sq flow control disable supported 00:30:15.376 portid: 1 00:30:15.376 trsvcid: 4420 00:30:15.376 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:15.376 traddr: 10.0.0.1 00:30:15.376 eflags: none 00:30:15.376 sectype: none 00:30:15.376 =====Discovery Log Entry 1====== 00:30:15.376 trtype: tcp 00:30:15.376 adrfam: ipv4 00:30:15.376 subtype: nvme subsystem 00:30:15.376 treq: not specified, sq flow control disable supported 00:30:15.376 portid: 1 00:30:15.376 trsvcid: 4420 00:30:15.376 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:15.376 traddr: 10.0.0.1 00:30:15.376 eflags: none 00:30:15.376 sectype: none 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:15.376 15:17:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:18.663 Initializing NVMe Controllers 00:30:18.663 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:18.664 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:18.664 Initialization complete. Launching workers. 
00:30:18.664 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37976, failed: 0 00:30:18.664 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37976, failed to submit 0 00:30:18.664 success 0, unsuccess 37976, failed 0 00:30:18.664 15:17:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:18.664 15:17:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:21.951 Initializing NVMe Controllers 00:30:21.951 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:21.951 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:21.951 Initialization complete. Launching workers. 00:30:21.951 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90466, failed: 0 00:30:21.951 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41868, failed to submit 48598 00:30:21.951 success 0, unsuccess 41868, failed 0 00:30:21.951 15:17:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:21.951 15:17:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:25.242 Initializing NVMe Controllers 00:30:25.242 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:25.242 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:25.242 Initialization complete. Launching workers. 00:30:25.242 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 113890, failed: 0 00:30:25.242 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28444, failed to submit 85446 00:30:25.242 success 0, unsuccess 28444, failed 0 00:30:25.242 15:17:40 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:25.242 15:17:40 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:25.242 15:17:40 -- nvmf/common.sh@675 -- # echo 0 00:30:25.242 15:17:40 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.242 15:17:40 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:25.242 15:17:40 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:25.242 15:17:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:25.242 15:17:40 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:25.242 15:17:40 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:25.242 15:17:40 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:25.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:29.098 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:29.098 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:29.098 00:30:29.098 real 0m14.754s 00:30:29.098 user 0m6.298s 00:30:29.098 sys 0m5.865s 00:30:29.098 15:17:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:29.098 15:17:44 -- common/autotest_common.sh@10 -- # set +x 00:30:29.098 
************************************ 00:30:29.098 END TEST kernel_target_abort 00:30:29.098 ************************************ 00:30:29.098 15:17:44 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:29.098 15:17:44 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:29.098 15:17:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:29.098 15:17:44 -- nvmf/common.sh@117 -- # sync 00:30:29.098 15:17:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:29.098 15:17:44 -- nvmf/common.sh@120 -- # set +e 00:30:29.098 15:17:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:29.098 15:17:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:29.098 rmmod nvme_tcp 00:30:29.098 rmmod nvme_fabrics 00:30:29.098 rmmod nvme_keyring 00:30:29.098 15:17:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:29.098 15:17:44 -- nvmf/common.sh@124 -- # set -e 00:30:29.098 15:17:44 -- nvmf/common.sh@125 -- # return 0 00:30:29.098 15:17:44 -- nvmf/common.sh@478 -- # '[' -n 91403 ']' 00:30:29.098 15:17:44 -- nvmf/common.sh@479 -- # killprocess 91403 00:30:29.098 15:17:44 -- common/autotest_common.sh@936 -- # '[' -z 91403 ']' 00:30:29.098 15:17:44 -- common/autotest_common.sh@940 -- # kill -0 91403 00:30:29.098 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91403) - No such process 00:30:29.098 Process with pid 91403 is not found 00:30:29.098 15:17:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91403 is not found' 00:30:29.098 15:17:44 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:29.098 15:17:44 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:29.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:29.616 Waiting for block devices as requested 00:30:29.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:29.875 15:17:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:29.875 15:17:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:29.875 15:17:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.875 15:17:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.875 15:17:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.875 15:17:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.875 15:17:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.875 15:17:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:29.875 00:30:29.875 real 0m29.924s 00:30:29.875 user 0m53.017s 00:30:29.875 sys 0m10.115s 00:30:29.875 15:17:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:29.875 15:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:29.875 ************************************ 00:30:29.875 END TEST nvmf_abort_qd_sizes 00:30:29.875 ************************************ 00:30:29.875 15:17:45 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:29.875 15:17:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:29.875 15:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:29.875 15:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:29.875 ************************************ 00:30:29.875 START TEST keyring_file 00:30:29.875 ************************************ 00:30:29.875 15:17:45 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:30.134 * Looking for test storage... 00:30:30.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:30.134 15:17:45 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:30.134 15:17:45 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:30.134 15:17:45 -- nvmf/common.sh@7 -- # uname -s 00:30:30.134 15:17:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.134 15:17:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.134 15:17:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.134 15:17:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.134 15:17:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.134 15:17:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.134 15:17:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.134 15:17:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.134 15:17:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.135 15:17:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.135 15:17:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:30:30.135 15:17:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=04ecce02-6bd0-4bb0-8c3d-3e8a409b1efd 00:30:30.135 15:17:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.135 15:17:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.135 15:17:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:30.135 15:17:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.135 15:17:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:30.135 15:17:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.135 15:17:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.135 15:17:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.135 15:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.135 15:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.135 15:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.135 15:17:45 -- paths/export.sh@5 -- # export PATH 00:30:30.135 15:17:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.135 15:17:45 -- nvmf/common.sh@47 -- # : 0 00:30:30.135 15:17:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:30.135 15:17:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:30.135 15:17:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.135 15:17:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.135 15:17:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.135 15:17:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:30.135 15:17:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:30.135 15:17:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:30.135 15:17:45 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:30.135 15:17:45 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:30.135 15:17:45 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:30.135 15:17:45 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:30.135 15:17:45 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:30.135 15:17:45 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:30.135 15:17:45 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:30.135 15:17:45 -- keyring/common.sh@15 -- # local name key digest path 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # name=key0 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # digest=0 00:30:30.135 15:17:45 -- keyring/common.sh@18 -- # mktemp 00:30:30.135 15:17:45 -- keyring/common.sh@18 -- # path=/tmp/tmp.xRglJt0CuN 00:30:30.135 15:17:45 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:30.135 15:17:45 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:30.135 15:17:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # digest=0 00:30:30.135 15:17:45 -- nvmf/common.sh@694 -- # python - 00:30:30.135 15:17:45 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xRglJt0CuN 00:30:30.135 15:17:45 -- keyring/common.sh@23 -- # echo /tmp/tmp.xRglJt0CuN 00:30:30.135 15:17:45 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xRglJt0CuN 00:30:30.135 15:17:45 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:30.135 15:17:45 -- keyring/common.sh@15 -- # local name key digest path 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # name=key1 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:30.135 15:17:45 -- keyring/common.sh@17 -- # digest=0 00:30:30.135 15:17:45 -- keyring/common.sh@18 -- # mktemp 00:30:30.135 15:17:45 -- keyring/common.sh@18 -- # path=/tmp/tmp.5Xlh3ybDQg 00:30:30.135 15:17:45 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:30.135 15:17:45 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:30:30.135 15:17:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:30:30.135 15:17:45 -- nvmf/common.sh@693 -- # digest=0 00:30:30.135 15:17:45 -- nvmf/common.sh@694 -- # python - 00:30:30.394 15:17:45 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5Xlh3ybDQg 00:30:30.394 15:17:45 -- keyring/common.sh@23 -- # echo /tmp/tmp.5Xlh3ybDQg 00:30:30.394 15:17:45 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5Xlh3ybDQg 00:30:30.394 15:17:45 -- keyring/file.sh@30 -- # tgtpid=92318 00:30:30.394 15:17:45 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:30.394 15:17:45 -- keyring/file.sh@32 -- # waitforlisten 92318 00:30:30.394 15:17:45 -- common/autotest_common.sh@817 -- # '[' -z 92318 ']' 00:30:30.394 15:17:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.394 15:17:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:30.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.394 15:17:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.394 15:17:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:30.394 15:17:45 -- common/autotest_common.sh@10 -- # set +x 00:30:30.394 [2024-04-18 15:17:45.910752] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:30:30.394 [2024-04-18 15:17:45.910843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92318 ] 00:30:30.394 [2024-04-18 15:17:46.050343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.653 [2024-04-18 15:17:46.134815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.223 15:17:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:31.223 15:17:46 -- common/autotest_common.sh@850 -- # return 0 00:30:31.223 15:17:46 -- keyring/file.sh@33 -- # rpc_cmd 00:30:31.223 15:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.223 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:31.223 [2024-04-18 15:17:46.844121] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.223 null0 00:30:31.223 [2024-04-18 15:17:46.876008] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:31.223 [2024-04-18 15:17:46.876251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:31.223 [2024-04-18 15:17:46.884020] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:31.223 15:17:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:31.223 15:17:46 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:31.223 15:17:46 -- common/autotest_common.sh@638 -- # local es=0 00:30:31.223 15:17:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:31.223 15:17:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:30:31.223 15:17:46 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:31.223 15:17:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:30:31.223 15:17:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:31.223 15:17:46 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:31.223 15:17:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:31.223 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:31.223 [2024-04-18 15:17:46.899973] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/18 15:17:46 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:30:31.223 request: 00:30:31.223 { 00:30:31.223 "method": "nvmf_subsystem_add_listener", 00:30:31.223 "params": { 00:30:31.223 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.223 "secure_channel": false, 00:30:31.223 "listen_address": { 00:30:31.223 "trtype": "tcp", 00:30:31.223 "traddr": "127.0.0.1", 00:30:31.223 "trsvcid": "4420" 00:30:31.223 } 00:30:31.223 } 00:30:31.223 } 00:30:31.223 Got JSON-RPC error response 00:30:31.223 GoRPCClient: error on JSON-RPC call 00:30:31.223 15:17:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:30:31.223 15:17:46 -- common/autotest_common.sh@641 -- # es=1 00:30:31.223 15:17:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:31.223 15:17:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:31.223 15:17:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:31.223 15:17:46 -- keyring/file.sh@46 -- # bperfpid=92353 00:30:31.223 15:17:46 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:31.223 15:17:46 -- keyring/file.sh@48 -- # waitforlisten 92353 /var/tmp/bperf.sock 00:30:31.223 15:17:46 -- common/autotest_common.sh@817 -- # '[' -z 92353 ']' 00:30:31.223 15:17:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:31.223 15:17:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:31.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:31.223 15:17:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:31.223 15:17:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:31.223 15:17:46 -- common/autotest_common.sh@10 -- # set +x 00:30:31.483 [2024-04-18 15:17:46.964649] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 
00:30:31.483 [2024-04-18 15:17:46.964732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92353 ] 00:30:31.483 [2024-04-18 15:17:47.107460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.741 [2024-04-18 15:17:47.209785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.309 15:17:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:32.309 15:17:47 -- common/autotest_common.sh@850 -- # return 0 00:30:32.309 15:17:47 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:32.309 15:17:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:32.567 15:17:48 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5Xlh3ybDQg 00:30:32.567 15:17:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5Xlh3ybDQg 00:30:32.826 15:17:48 -- keyring/file.sh@51 -- # jq -r .path 00:30:32.826 15:17:48 -- keyring/file.sh@51 -- # get_key key0 00:30:32.826 15:17:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:32.826 15:17:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:32.826 15:17:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:33.085 15:17:48 -- keyring/file.sh@51 -- # [[ /tmp/tmp.xRglJt0CuN == \/\t\m\p\/\t\m\p\.\x\R\g\l\J\t\0\C\u\N ]] 00:30:33.085 15:17:48 -- keyring/file.sh@52 -- # get_key key1 00:30:33.085 15:17:48 -- keyring/file.sh@52 -- # jq -r .path 00:30:33.085 15:17:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:33.085 15:17:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:33.085 15:17:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:33.085 15:17:48 -- keyring/file.sh@52 -- # [[ /tmp/tmp.5Xlh3ybDQg == \/\t\m\p\/\t\m\p\.\5\X\l\h\3\y\b\D\Q\g ]] 00:30:33.343 15:17:48 -- keyring/file.sh@53 -- # get_refcnt key0 00:30:33.343 15:17:48 -- keyring/common.sh@12 -- # get_key key0 00:30:33.343 15:17:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:33.343 15:17:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:33.343 15:17:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:33.343 15:17:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:33.603 15:17:49 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:33.603 15:17:49 -- keyring/file.sh@54 -- # get_refcnt key1 00:30:33.603 15:17:49 -- keyring/common.sh@12 -- # get_key key1 00:30:33.603 15:17:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:33.603 15:17:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:33.603 15:17:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:33.603 15:17:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:33.603 15:17:49 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:33.603 15:17:49 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:30:33.603 15:17:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:33.863 [2024-04-18 15:17:49.461987] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:33.863 nvme0n1 00:30:33.863 15:17:49 -- keyring/file.sh@59 -- # get_refcnt key0 00:30:33.863 15:17:49 -- keyring/common.sh@12 -- # get_key key0 00:30:33.863 15:17:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:33.863 15:17:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:33.863 15:17:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:34.122 15:17:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:34.122 15:17:49 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:34.122 15:17:49 -- keyring/file.sh@60 -- # get_refcnt key1 00:30:34.122 15:17:49 -- keyring/common.sh@12 -- # get_key key1 00:30:34.122 15:17:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:34.122 15:17:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:34.122 15:17:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:34.122 15:17:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:34.381 15:17:49 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:34.381 15:17:49 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:34.639 Running I/O for 1 seconds... 00:30:35.575 00:30:35.575 Latency(us) 00:30:35.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.575 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:35.575 nvme0n1 : 1.00 16608.83 64.88 0.00 0.00 7688.89 3816.35 14002.07 00:30:35.575 =================================================================================================================== 00:30:35.575 Total : 16608.83 64.88 0.00 0.00 7688.89 3816.35 14002.07 00:30:35.575 0 00:30:35.575 15:17:51 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:35.575 15:17:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:35.833 15:17:51 -- keyring/file.sh@65 -- # get_refcnt key0 00:30:35.833 15:17:51 -- keyring/common.sh@12 -- # get_key key0 00:30:35.833 15:17:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:35.833 15:17:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:35.833 15:17:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:35.833 15:17:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:36.092 15:17:51 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:36.092 15:17:51 -- keyring/file.sh@66 -- # get_refcnt key1 00:30:36.092 15:17:51 -- keyring/common.sh@12 -- # get_key key1 00:30:36.092 15:17:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.092 15:17:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.092 15:17:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.092 15:17:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:36.092 
15:17:51 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:36.092 15:17:51 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:36.092 15:17:51 -- common/autotest_common.sh@638 -- # local es=0 00:30:36.092 15:17:51 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:36.092 15:17:51 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:36.092 15:17:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:36.092 15:17:51 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:36.092 15:17:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:36.092 15:17:51 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:36.092 15:17:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:36.350 [2024-04-18 15:17:51.950589] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:36.350 [2024-04-18 15:17:51.950612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc8670 (107): Transport endpoint is not connected 00:30:36.350 [2024-04-18 15:17:51.951599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc8670 (9): Bad file descriptor 00:30:36.351 [2024-04-18 15:17:51.952596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:36.351 [2024-04-18 15:17:51.952612] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:36.351 [2024-04-18 15:17:51.952622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:30:36.351 2024/04/18 15:17:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:30:36.351 request: 00:30:36.351 { 00:30:36.351 "method": "bdev_nvme_attach_controller", 00:30:36.351 "params": { 00:30:36.351 "name": "nvme0", 00:30:36.351 "trtype": "tcp", 00:30:36.351 "traddr": "127.0.0.1", 00:30:36.351 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:36.351 "adrfam": "ipv4", 00:30:36.351 "trsvcid": "4420", 00:30:36.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.351 "psk": "key1" 00:30:36.351 } 00:30:36.351 } 00:30:36.351 Got JSON-RPC error response 00:30:36.351 GoRPCClient: error on JSON-RPC call 00:30:36.351 15:17:51 -- common/autotest_common.sh@641 -- # es=1 00:30:36.351 15:17:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:36.351 15:17:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:36.351 15:17:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:36.351 15:17:51 -- keyring/file.sh@71 -- # get_refcnt key0 00:30:36.351 15:17:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.351 15:17:51 -- keyring/common.sh@12 -- # get_key key0 00:30:36.351 15:17:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:36.351 15:17:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.351 15:17:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.609 15:17:52 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:36.609 15:17:52 -- keyring/file.sh@72 -- # get_refcnt key1 00:30:36.609 15:17:52 -- keyring/common.sh@12 -- # get_key key1 00:30:36.609 15:17:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:36.609 15:17:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:36.609 15:17:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:36.609 15:17:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:36.868 15:17:52 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:36.868 15:17:52 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:36.868 15:17:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:37.127 15:17:52 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:37.127 15:17:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:37.127 15:17:52 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:37.127 15:17:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:37.127 15:17:52 -- keyring/file.sh@77 -- # jq length 00:30:37.385 15:17:53 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:37.385 15:17:53 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xRglJt0CuN 00:30:37.386 15:17:53 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.386 15:17:53 -- common/autotest_common.sh@638 -- # local es=0 00:30:37.386 15:17:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.386 15:17:53 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:37.386 
15:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:37.386 15:17:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:37.386 15:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:37.386 15:17:53 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.386 15:17:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.660 [2024-04-18 15:17:53.208044] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xRglJt0CuN': 0100660 00:30:37.660 [2024-04-18 15:17:53.208105] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:37.660 2024/04/18 15:17:53 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.xRglJt0CuN], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:30:37.660 request: 00:30:37.660 { 00:30:37.660 "method": "keyring_file_add_key", 00:30:37.660 "params": { 00:30:37.660 "name": "key0", 00:30:37.660 "path": "/tmp/tmp.xRglJt0CuN" 00:30:37.660 } 00:30:37.660 } 00:30:37.660 Got JSON-RPC error response 00:30:37.660 GoRPCClient: error on JSON-RPC call 00:30:37.660 15:17:53 -- common/autotest_common.sh@641 -- # es=1 00:30:37.660 15:17:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:37.660 15:17:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:37.660 15:17:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:37.660 15:17:53 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xRglJt0CuN 00:30:37.660 15:17:53 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.660 15:17:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xRglJt0CuN 00:30:37.917 15:17:53 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xRglJt0CuN 00:30:37.917 15:17:53 -- keyring/file.sh@88 -- # get_refcnt key0 00:30:37.917 15:17:53 -- keyring/common.sh@12 -- # get_key key0 00:30:37.917 15:17:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:37.917 15:17:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:37.917 15:17:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:37.917 15:17:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:38.175 15:17:53 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:38.175 15:17:53 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:38.175 15:17:53 -- common/autotest_common.sh@638 -- # local es=0 00:30:38.175 15:17:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:38.175 15:17:53 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:30:38.175 15:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:38.175 15:17:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:30:38.175 15:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:38.175 15:17:53 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:38.175 15:17:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:38.433 [2024-04-18 15:17:54.054833] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xRglJt0CuN': No such file or directory 00:30:38.433 [2024-04-18 15:17:54.054895] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:38.433 [2024-04-18 15:17:54.054922] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:38.433 [2024-04-18 15:17:54.054932] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:38.433 [2024-04-18 15:17:54.054942] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:38.433 2024/04/18 15:17:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:30:38.433 request: 00:30:38.433 { 00:30:38.433 "method": "bdev_nvme_attach_controller", 00:30:38.433 "params": { 00:30:38.433 "name": "nvme0", 00:30:38.433 "trtype": "tcp", 00:30:38.433 "traddr": "127.0.0.1", 00:30:38.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.433 "adrfam": "ipv4", 00:30:38.433 "trsvcid": "4420", 00:30:38.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.433 "psk": "key0" 00:30:38.433 } 00:30:38.433 } 00:30:38.433 Got JSON-RPC error response 00:30:38.433 GoRPCClient: error on JSON-RPC call 00:30:38.433 15:17:54 -- common/autotest_common.sh@641 -- # es=1 00:30:38.433 15:17:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:38.433 15:17:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:38.433 15:17:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:38.433 15:17:54 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:38.433 15:17:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:38.690 15:17:54 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:38.690 15:17:54 -- keyring/common.sh@15 -- # local name key digest path 00:30:38.690 15:17:54 -- keyring/common.sh@17 -- # name=key0 00:30:38.690 15:17:54 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:38.690 15:17:54 -- keyring/common.sh@17 -- # digest=0 00:30:38.690 15:17:54 -- keyring/common.sh@18 -- # mktemp 00:30:38.690 15:17:54 -- keyring/common.sh@18 -- # path=/tmp/tmp.8ZcYyty6aw 00:30:38.690 15:17:54 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:38.690 15:17:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:38.690 15:17:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:30:38.690 15:17:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:30:38.690 15:17:54 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:30:38.691 15:17:54 -- nvmf/common.sh@693 -- # digest=0 00:30:38.691 15:17:54 -- nvmf/common.sh@694 -- # python - 00:30:38.949 
15:17:54 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8ZcYyty6aw 00:30:38.949 15:17:54 -- keyring/common.sh@23 -- # echo /tmp/tmp.8ZcYyty6aw 00:30:38.949 15:17:54 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.8ZcYyty6aw 00:30:38.949 15:17:54 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8ZcYyty6aw 00:30:38.949 15:17:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8ZcYyty6aw 00:30:39.235 15:17:54 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.235 15:17:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:39.493 nvme0n1 00:30:39.493 15:17:55 -- keyring/file.sh@99 -- # get_refcnt key0 00:30:39.493 15:17:55 -- keyring/common.sh@12 -- # get_key key0 00:30:39.493 15:17:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:39.493 15:17:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:39.493 15:17:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:39.493 15:17:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:39.751 15:17:55 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:39.751 15:17:55 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:39.751 15:17:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:40.008 15:17:55 -- keyring/file.sh@101 -- # get_key key0 00:30:40.008 15:17:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.008 15:17:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.008 15:17:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.008 15:17:55 -- keyring/file.sh@101 -- # jq -r .removed 00:30:40.265 15:17:55 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:40.265 15:17:55 -- keyring/file.sh@102 -- # get_refcnt key0 00:30:40.265 15:17:55 -- keyring/common.sh@12 -- # get_key key0 00:30:40.265 15:17:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:40.265 15:17:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:40.265 15:17:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:40.265 15:17:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.522 15:17:56 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:40.522 15:17:56 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:40.522 15:17:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:40.780 15:17:56 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:40.780 15:17:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:40.780 15:17:56 -- keyring/file.sh@104 -- # jq length 00:30:41.037 15:17:56 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:41.037 15:17:56 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8ZcYyty6aw 00:30:41.037 15:17:56 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8ZcYyty6aw 00:30:41.295 15:17:56 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5Xlh3ybDQg 00:30:41.295 15:17:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5Xlh3ybDQg 00:30:41.555 15:17:57 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:41.555 15:17:57 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:41.812 nvme0n1 00:30:41.812 15:17:57 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:41.812 15:17:57 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:42.069 15:17:57 -- keyring/file.sh@112 -- # config='{ 00:30:42.069 "subsystems": [ 00:30:42.069 { 00:30:42.069 "subsystem": "keyring", 00:30:42.069 "config": [ 00:30:42.069 { 00:30:42.069 "method": "keyring_file_add_key", 00:30:42.069 "params": { 00:30:42.069 "name": "key0", 00:30:42.069 "path": "/tmp/tmp.8ZcYyty6aw" 00:30:42.069 } 00:30:42.069 }, 00:30:42.069 { 00:30:42.069 "method": "keyring_file_add_key", 00:30:42.069 "params": { 00:30:42.069 "name": "key1", 00:30:42.069 "path": "/tmp/tmp.5Xlh3ybDQg" 00:30:42.069 } 00:30:42.069 } 00:30:42.069 ] 00:30:42.069 }, 00:30:42.069 { 00:30:42.069 "subsystem": "iobuf", 00:30:42.069 "config": [ 00:30:42.069 { 00:30:42.069 "method": "iobuf_set_options", 00:30:42.069 "params": { 00:30:42.069 "large_bufsize": 135168, 00:30:42.069 "large_pool_count": 1024, 00:30:42.069 "small_bufsize": 8192, 00:30:42.069 "small_pool_count": 8192 00:30:42.069 } 00:30:42.069 } 00:30:42.069 ] 00:30:42.069 }, 00:30:42.069 { 00:30:42.069 "subsystem": "sock", 00:30:42.069 "config": [ 00:30:42.069 { 00:30:42.069 "method": "sock_impl_set_options", 00:30:42.069 "params": { 00:30:42.069 "enable_ktls": false, 00:30:42.069 "enable_placement_id": 0, 00:30:42.069 "enable_quickack": false, 00:30:42.069 "enable_recv_pipe": true, 00:30:42.069 "enable_zerocopy_send_client": false, 00:30:42.069 "enable_zerocopy_send_server": true, 00:30:42.069 "impl_name": "posix", 00:30:42.069 "recv_buf_size": 2097152, 00:30:42.069 "send_buf_size": 2097152, 00:30:42.069 "tls_version": 0, 00:30:42.069 "zerocopy_threshold": 0 00:30:42.069 } 00:30:42.069 }, 00:30:42.069 { 00:30:42.069 "method": "sock_impl_set_options", 00:30:42.069 "params": { 00:30:42.069 "enable_ktls": false, 00:30:42.069 "enable_placement_id": 0, 00:30:42.069 "enable_quickack": false, 00:30:42.069 "enable_recv_pipe": true, 00:30:42.069 "enable_zerocopy_send_client": false, 00:30:42.069 "enable_zerocopy_send_server": true, 00:30:42.069 "impl_name": "ssl", 00:30:42.069 "recv_buf_size": 4096, 00:30:42.069 "send_buf_size": 4096, 00:30:42.069 "tls_version": 0, 00:30:42.069 "zerocopy_threshold": 0 00:30:42.069 } 00:30:42.070 } 00:30:42.070 ] 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "subsystem": "vmd", 00:30:42.070 "config": [] 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "subsystem": "accel", 00:30:42.070 "config": [ 00:30:42.070 { 00:30:42.070 "method": "accel_set_options", 00:30:42.070 "params": { 00:30:42.070 "buf_count": 2048, 00:30:42.070 "large_cache_size": 16, 00:30:42.070 
"sequence_count": 2048, 00:30:42.070 "small_cache_size": 128, 00:30:42.070 "task_count": 2048 00:30:42.070 } 00:30:42.070 } 00:30:42.070 ] 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "subsystem": "bdev", 00:30:42.070 "config": [ 00:30:42.070 { 00:30:42.070 "method": "bdev_set_options", 00:30:42.070 "params": { 00:30:42.070 "bdev_auto_examine": true, 00:30:42.070 "bdev_io_cache_size": 256, 00:30:42.070 "bdev_io_pool_size": 65535, 00:30:42.070 "iobuf_large_cache_size": 16, 00:30:42.070 "iobuf_small_cache_size": 128 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_raid_set_options", 00:30:42.070 "params": { 00:30:42.070 "process_window_size_kb": 1024 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_iscsi_set_options", 00:30:42.070 "params": { 00:30:42.070 "timeout_sec": 30 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_nvme_set_options", 00:30:42.070 "params": { 00:30:42.070 "action_on_timeout": "none", 00:30:42.070 "allow_accel_sequence": false, 00:30:42.070 "arbitration_burst": 0, 00:30:42.070 "bdev_retry_count": 3, 00:30:42.070 "ctrlr_loss_timeout_sec": 0, 00:30:42.070 "delay_cmd_submit": true, 00:30:42.070 "dhchap_dhgroups": [ 00:30:42.070 "null", 00:30:42.070 "ffdhe2048", 00:30:42.070 "ffdhe3072", 00:30:42.070 "ffdhe4096", 00:30:42.070 "ffdhe6144", 00:30:42.070 "ffdhe8192" 00:30:42.070 ], 00:30:42.070 "dhchap_digests": [ 00:30:42.070 "sha256", 00:30:42.070 "sha384", 00:30:42.070 "sha512" 00:30:42.070 ], 00:30:42.070 "disable_auto_failback": false, 00:30:42.070 "fast_io_fail_timeout_sec": 0, 00:30:42.070 "generate_uuids": false, 00:30:42.070 "high_priority_weight": 0, 00:30:42.070 "io_path_stat": false, 00:30:42.070 "io_queue_requests": 512, 00:30:42.070 "keep_alive_timeout_ms": 10000, 00:30:42.070 "low_priority_weight": 0, 00:30:42.070 "medium_priority_weight": 0, 00:30:42.070 "nvme_adminq_poll_period_us": 10000, 00:30:42.070 "nvme_error_stat": false, 00:30:42.070 "nvme_ioq_poll_period_us": 0, 00:30:42.070 "rdma_cm_event_timeout_ms": 0, 00:30:42.070 "rdma_max_cq_size": 0, 00:30:42.070 "rdma_srq_size": 0, 00:30:42.070 "reconnect_delay_sec": 0, 00:30:42.070 "timeout_admin_us": 0, 00:30:42.070 "timeout_us": 0, 00:30:42.070 "transport_ack_timeout": 0, 00:30:42.070 "transport_retry_count": 4, 00:30:42.070 "transport_tos": 0 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_nvme_attach_controller", 00:30:42.070 "params": { 00:30:42.070 "adrfam": "IPv4", 00:30:42.070 "ctrlr_loss_timeout_sec": 0, 00:30:42.070 "ddgst": false, 00:30:42.070 "fast_io_fail_timeout_sec": 0, 00:30:42.070 "hdgst": false, 00:30:42.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.070 "name": "nvme0", 00:30:42.070 "prchk_guard": false, 00:30:42.070 "prchk_reftag": false, 00:30:42.070 "psk": "key0", 00:30:42.070 "reconnect_delay_sec": 0, 00:30:42.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.070 "traddr": "127.0.0.1", 00:30:42.070 "trsvcid": "4420", 00:30:42.070 "trtype": "TCP" 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_nvme_set_hotplug", 00:30:42.070 "params": { 00:30:42.070 "enable": false, 00:30:42.070 "period_us": 100000 00:30:42.070 } 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "method": "bdev_wait_for_examine" 00:30:42.070 } 00:30:42.070 ] 00:30:42.070 }, 00:30:42.070 { 00:30:42.070 "subsystem": "nbd", 00:30:42.070 "config": [] 00:30:42.070 } 00:30:42.070 ] 00:30:42.070 }' 00:30:42.070 15:17:57 -- keyring/file.sh@114 -- # killprocess 92353 00:30:42.070 15:17:57 -- 
common/autotest_common.sh@936 -- # '[' -z 92353 ']' 00:30:42.070 15:17:57 -- common/autotest_common.sh@940 -- # kill -0 92353 00:30:42.070 15:17:57 -- common/autotest_common.sh@941 -- # uname 00:30:42.070 15:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:42.070 15:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92353 00:30:42.070 15:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:42.070 15:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:42.070 killing process with pid 92353 00:30:42.070 15:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92353' 00:30:42.070 15:17:57 -- common/autotest_common.sh@955 -- # kill 92353 00:30:42.070 Received shutdown signal, test time was about 1.000000 seconds 00:30:42.070 00:30:42.070 Latency(us) 00:30:42.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.070 =================================================================================================================== 00:30:42.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.070 15:17:57 -- common/autotest_common.sh@960 -- # wait 92353 00:30:42.328 15:17:57 -- keyring/file.sh@117 -- # bperfpid=92821 00:30:42.328 15:17:57 -- keyring/file.sh@119 -- # waitforlisten 92821 /var/tmp/bperf.sock 00:30:42.328 15:17:57 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:42.328 15:17:57 -- common/autotest_common.sh@817 -- # '[' -z 92821 ']' 00:30:42.328 15:17:57 -- keyring/file.sh@115 -- # echo '{ 00:30:42.328 "subsystems": [ 00:30:42.328 { 00:30:42.328 "subsystem": "keyring", 00:30:42.328 "config": [ 00:30:42.328 { 00:30:42.328 "method": "keyring_file_add_key", 00:30:42.328 "params": { 00:30:42.328 "name": "key0", 00:30:42.328 "path": "/tmp/tmp.8ZcYyty6aw" 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "keyring_file_add_key", 00:30:42.328 "params": { 00:30:42.328 "name": "key1", 00:30:42.328 "path": "/tmp/tmp.5Xlh3ybDQg" 00:30:42.328 } 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "iobuf", 00:30:42.328 "config": [ 00:30:42.328 { 00:30:42.328 "method": "iobuf_set_options", 00:30:42.328 "params": { 00:30:42.328 "large_bufsize": 135168, 00:30:42.328 "large_pool_count": 1024, 00:30:42.328 "small_bufsize": 8192, 00:30:42.328 "small_pool_count": 8192 00:30:42.328 } 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "sock", 00:30:42.328 "config": [ 00:30:42.328 { 00:30:42.328 "method": "sock_impl_set_options", 00:30:42.328 "params": { 00:30:42.328 "enable_ktls": false, 00:30:42.328 "enable_placement_id": 0, 00:30:42.328 "enable_quickack": false, 00:30:42.328 "enable_recv_pipe": true, 00:30:42.328 "enable_zerocopy_send_client": false, 00:30:42.328 "enable_zerocopy_send_server": true, 00:30:42.328 "impl_name": "posix", 00:30:42.328 "recv_buf_size": 2097152, 00:30:42.328 "send_buf_size": 2097152, 00:30:42.328 "tls_version": 0, 00:30:42.328 "zerocopy_threshold": 0 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "sock_impl_set_options", 00:30:42.328 "params": { 00:30:42.328 "enable_ktls": false, 00:30:42.328 "enable_placement_id": 0, 00:30:42.328 "enable_quickack": false, 00:30:42.328 "enable_recv_pipe": true, 00:30:42.328 "enable_zerocopy_send_client": false, 00:30:42.328 "enable_zerocopy_send_server": true, 00:30:42.328 "impl_name": 
"ssl", 00:30:42.328 "recv_buf_size": 4096, 00:30:42.328 "send_buf_size": 4096, 00:30:42.328 "tls_version": 0, 00:30:42.328 "zerocopy_threshold": 0 00:30:42.328 } 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "vmd", 00:30:42.328 "config": [] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "accel", 00:30:42.328 "config": [ 00:30:42.328 { 00:30:42.328 "method": "accel_set_options", 00:30:42.328 "params": { 00:30:42.328 "buf_count": 2048, 00:30:42.328 "large_cache_size": 16, 00:30:42.328 "sequence_count": 2048, 00:30:42.328 "small_cache_size": 128, 00:30:42.328 "task_count": 2048 00:30:42.328 } 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "bdev", 00:30:42.328 "config": [ 00:30:42.328 { 00:30:42.328 "method": "bdev_set_options", 00:30:42.328 "params": { 00:30:42.328 "bdev_auto_examine": true, 00:30:42.328 "bdev_io_cache_size": 256, 00:30:42.328 "bdev_io_pool_size": 65535, 00:30:42.328 "iobuf_large_cache_size": 16, 00:30:42.328 "iobuf_small_cache_size": 128 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_raid_set_options", 00:30:42.328 "params": { 00:30:42.328 "process_window_size_kb": 1024 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_iscsi_set_options", 00:30:42.328 "params": { 00:30:42.328 "timeout_sec": 30 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_nvme_set_options", 00:30:42.328 "params": { 00:30:42.328 "action_on_timeout": "none", 00:30:42.328 "allow_accel_sequence": false, 00:30:42.328 "arbitration_burst": 0, 00:30:42.328 "bdev_retry_count": 3, 00:30:42.328 "ctrlr_loss_timeout_sec": 0, 00:30:42.328 "delay_cmd_submit": true, 00:30:42.328 "dhchap_dhgroups": [ 00:30:42.328 "null", 00:30:42.328 "ffdhe2048", 00:30:42.328 "ffdhe3072", 00:30:42.328 "ffdhe4096", 00:30:42.328 "ffdhe6144", 00:30:42.328 "ffdhe8192" 00:30:42.328 ], 00:30:42.328 "dhchap_digests": [ 00:30:42.328 "sha256", 00:30:42.328 "sha384", 00:30:42.328 "sha512" 00:30:42.328 ], 00:30:42.328 "disable_auto_failback": false, 00:30:42.328 "fast_io_fail_timeout_sec": 0, 00:30:42.328 "generate_uuids": false, 00:30:42.328 "high_priority_weight": 0, 00:30:42.328 "io_path_stat": false, 00:30:42.328 "io_queue_requests": 512, 00:30:42.328 "keep_alive_timeout_ms": 10000, 00:30:42.328 "low_priority_weight": 0, 00:30:42.328 "medium_priority_weight": 0, 00:30:42.328 "nvme_adminq_poll_period_us": 10000, 00:30:42.328 "nvme_error_stat": false, 00:30:42.328 "nvme_ioq_poll_period_us": 0, 00:30:42.328 "rdma_cm_event_timeout_ms": 0, 00:30:42.328 "rdma_max_cq_size": 0, 00:30:42.328 "rdma_srq_size": 0, 00:30:42.328 "reconnect_delay_sec": 0, 00:30:42.328 "timeout_admin_us": 0, 00:30:42.328 "timeout_us": 0, 00:30:42.328 "transport_ack_timeout": 0, 00:30:42.328 "transport_retry_count": 4, 00:30:42.328 "transport_tos": 0 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_nvme_attach_controller", 00:30:42.328 "params": { 00:30:42.328 "adrfam": "IPv4", 00:30:42.328 "ctrlr_loss_timeout_sec": 0, 00:30:42.328 "ddgst": false, 00:30:42.328 "fast_io_fail_timeout_sec": 0, 00:30:42.328 "hdgst": false, 00:30:42.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.328 "name": "nvme0", 00:30:42.328 "prchk_guard": false, 00:30:42.328 "prchk_reftag": false, 00:30:42.328 "psk": "key0", 00:30:42.328 "reconnect_delay_sec": 0, 00:30:42.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.328 "traddr": "127.0.0.1", 00:30:42.328 "trsvcid": "4420", 00:30:42.328 "trtype": "TCP" 
00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_nvme_set_hotplug", 00:30:42.328 "params": { 00:30:42.328 "enable": false, 00:30:42.328 "period_us": 100000 00:30:42.328 } 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "method": "bdev_wait_for_examine" 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }, 00:30:42.328 { 00:30:42.328 "subsystem": "nbd", 00:30:42.328 "config": [] 00:30:42.328 } 00:30:42.328 ] 00:30:42.328 }' 00:30:42.328 15:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:42.328 15:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:42.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:42.329 15:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:42.329 15:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:42.329 15:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:42.585 [2024-04-18 15:17:58.044973] Starting SPDK v24.05-pre git sha1 ce34c7fd8 / DPDK 23.11.0 initialization... 00:30:42.585 [2024-04-18 15:17:58.045094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92821 ] 00:30:42.585 [2024-04-18 15:17:58.184146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.844 [2024-04-18 15:17:58.301836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.844 [2024-04-18 15:17:58.456206] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:43.411 15:17:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:43.411 15:17:59 -- common/autotest_common.sh@850 -- # return 0 00:30:43.411 15:17:59 -- keyring/file.sh@120 -- # jq length 00:30:43.411 15:17:59 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:43.411 15:17:59 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.669 15:17:59 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:43.669 15:17:59 -- keyring/file.sh@121 -- # get_refcnt key0 00:30:43.669 15:17:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:43.669 15:17:59 -- keyring/common.sh@12 -- # get_key key0 00:30:43.669 15:17:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:43.669 15:17:59 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:43.669 15:17:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:43.928 15:17:59 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:43.928 15:17:59 -- keyring/file.sh@122 -- # get_refcnt key1 00:30:43.928 15:17:59 -- keyring/common.sh@12 -- # get_key key1 00:30:43.928 15:17:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:43.928 15:17:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:43.928 15:17:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:43.928 15:17:59 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:44.187 15:17:59 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:44.187 15:17:59 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:44.187 15:17:59 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:44.187 15:17:59 -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:44.446 15:17:59 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:44.446 15:17:59 -- keyring/file.sh@1 -- # cleanup 00:30:44.446 15:17:59 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8ZcYyty6aw /tmp/tmp.5Xlh3ybDQg 00:30:44.446 15:17:59 -- keyring/file.sh@20 -- # killprocess 92821 00:30:44.446 15:17:59 -- common/autotest_common.sh@936 -- # '[' -z 92821 ']' 00:30:44.446 15:17:59 -- common/autotest_common.sh@940 -- # kill -0 92821 00:30:44.446 15:17:59 -- common/autotest_common.sh@941 -- # uname 00:30:44.446 15:17:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:44.446 15:17:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92821 00:30:44.446 15:18:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:44.446 15:18:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:44.446 15:18:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92821' 00:30:44.446 killing process with pid 92821 00:30:44.446 15:18:00 -- common/autotest_common.sh@955 -- # kill 92821 00:30:44.446 Received shutdown signal, test time was about 1.000000 seconds 00:30:44.446 00:30:44.446 Latency(us) 00:30:44.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.446 =================================================================================================================== 00:30:44.446 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:44.446 15:18:00 -- common/autotest_common.sh@960 -- # wait 92821 00:30:44.706 15:18:00 -- keyring/file.sh@21 -- # killprocess 92318 00:30:44.706 15:18:00 -- common/autotest_common.sh@936 -- # '[' -z 92318 ']' 00:30:44.706 15:18:00 -- common/autotest_common.sh@940 -- # kill -0 92318 00:30:44.706 15:18:00 -- common/autotest_common.sh@941 -- # uname 00:30:44.706 15:18:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:44.706 15:18:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92318 00:30:44.706 15:18:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:44.706 15:18:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:44.706 15:18:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92318' 00:30:44.706 killing process with pid 92318 00:30:44.706 15:18:00 -- common/autotest_common.sh@955 -- # kill 92318 00:30:44.706 [2024-04-18 15:18:00.278178] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:44.706 15:18:00 -- common/autotest_common.sh@960 -- # wait 92318 00:30:44.965 00:30:44.965 real 0m15.110s 00:30:44.965 user 0m36.270s 00:30:44.965 sys 0m3.897s 00:30:44.965 15:18:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:44.965 15:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:44.965 ************************************ 00:30:44.965 END TEST keyring_file 00:30:44.965 ************************************ 00:30:45.225 15:18:00 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:30:45.225 15:18:00 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@333 -- # 
'[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:30:45.225 15:18:00 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:30:45.225 15:18:00 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:30:45.225 15:18:00 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:30:45.225 15:18:00 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:30:45.225 15:18:00 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:30:45.225 15:18:00 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:30:45.225 15:18:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:45.225 15:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:45.225 15:18:00 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:30:45.225 15:18:00 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:30:45.225 15:18:00 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:30:45.225 15:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:47.756 INFO: APP EXITING 00:30:47.756 INFO: killing all VMs 00:30:47.756 INFO: killing vhost app 00:30:47.756 INFO: EXIT DONE 00:30:48.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:48.324 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:48.324 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:49.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:49.261 Cleaning 00:30:49.261 Removing: /var/run/dpdk/spdk0/config 00:30:49.261 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:49.261 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:49.261 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:49.261 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:49.261 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:49.261 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:49.261 Removing: /var/run/dpdk/spdk1/config 00:30:49.261 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:49.261 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:49.261 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:49.261 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:49.261 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:49.261 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:49.261 Removing: /var/run/dpdk/spdk2/config 00:30:49.261 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:49.261 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:49.261 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:49.261 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:49.261 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:49.261 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:49.261 Removing: /var/run/dpdk/spdk3/config 00:30:49.261 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:49.261 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:49.261 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:49.261 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:49.261 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:49.261 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:49.261 Removing: /var/run/dpdk/spdk4/config 00:30:49.261 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:49.261 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:49.261 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:49.261 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:49.261 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:49.261 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:49.261 Removing: /dev/shm/nvmf_trace.0 00:30:49.261 Removing: /dev/shm/spdk_tgt_trace.pid60056 00:30:49.261 Removing: /var/run/dpdk/spdk0 00:30:49.261 Removing: /var/run/dpdk/spdk1 00:30:49.261 Removing: /var/run/dpdk/spdk2 00:30:49.261 Removing: /var/run/dpdk/spdk3 00:30:49.261 Removing: /var/run/dpdk/spdk4 00:30:49.261 Removing: /var/run/dpdk/spdk_pid59887 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60056 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60354 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60445 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60490 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60608 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60638 00:30:49.520 Removing: /var/run/dpdk/spdk_pid60769 00:30:49.520 Removing: /var/run/dpdk/spdk_pid61044 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61220 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61307 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61404 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61497 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61545 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61579 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61651 00:30:49.521 Removing: /var/run/dpdk/spdk_pid61762 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62388 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62457 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62530 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62558 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62641 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62669 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62752 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62780 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62841 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62871 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62921 00:30:49.521 Removing: /var/run/dpdk/spdk_pid62951 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63108 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63149 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63233 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63312 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63340 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63416 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63457 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63496 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63535 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63580 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63614 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63658 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63691 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63735 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63768 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63812 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63851 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63889 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63929 00:30:49.521 Removing: /var/run/dpdk/spdk_pid63967 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64009 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64046 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64093 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64135 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64173 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64213 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64288 
00:30:49.521 Removing: /var/run/dpdk/spdk_pid64408 00:30:49.521 Removing: /var/run/dpdk/spdk_pid64838 00:30:49.521 Removing: /var/run/dpdk/spdk_pid68286 00:30:49.521 Removing: /var/run/dpdk/spdk_pid68635 00:30:49.521 Removing: /var/run/dpdk/spdk_pid69834 00:30:49.779 Removing: /var/run/dpdk/spdk_pid70211 00:30:49.779 Removing: /var/run/dpdk/spdk_pid70456 00:30:49.779 Removing: /var/run/dpdk/spdk_pid70506 00:30:49.779 Removing: /var/run/dpdk/spdk_pid71383 00:30:49.779 Removing: /var/run/dpdk/spdk_pid71433 00:30:49.779 Removing: /var/run/dpdk/spdk_pid71816 00:30:49.779 Removing: /var/run/dpdk/spdk_pid72347 00:30:49.779 Removing: /var/run/dpdk/spdk_pid72778 00:30:49.779 Removing: /var/run/dpdk/spdk_pid73747 00:30:49.779 Removing: /var/run/dpdk/spdk_pid74734 00:30:49.779 Removing: /var/run/dpdk/spdk_pid74859 00:30:49.779 Removing: /var/run/dpdk/spdk_pid74921 00:30:49.779 Removing: /var/run/dpdk/spdk_pid76405 00:30:49.779 Removing: /var/run/dpdk/spdk_pid76650 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77085 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77198 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77345 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77385 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77435 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77477 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77630 00:30:49.779 Removing: /var/run/dpdk/spdk_pid77782 00:30:49.779 Removing: /var/run/dpdk/spdk_pid78042 00:30:49.779 Removing: /var/run/dpdk/spdk_pid78159 00:30:49.779 Removing: /var/run/dpdk/spdk_pid78413 00:30:49.779 Removing: /var/run/dpdk/spdk_pid78538 00:30:49.779 Removing: /var/run/dpdk/spdk_pid78668 00:30:49.779 Removing: /var/run/dpdk/spdk_pid79010 00:30:49.779 Removing: /var/run/dpdk/spdk_pid79440 00:30:49.779 Removing: /var/run/dpdk/spdk_pid79746 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80268 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80270 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80615 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80629 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80654 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80679 00:30:49.779 Removing: /var/run/dpdk/spdk_pid80685 00:30:49.779 Removing: /var/run/dpdk/spdk_pid81005 00:30:49.779 Removing: /var/run/dpdk/spdk_pid81048 00:30:49.779 Removing: /var/run/dpdk/spdk_pid81382 00:30:49.779 Removing: /var/run/dpdk/spdk_pid81634 00:30:49.779 Removing: /var/run/dpdk/spdk_pid82126 00:30:49.779 Removing: /var/run/dpdk/spdk_pid82667 00:30:49.779 Removing: /var/run/dpdk/spdk_pid83266 00:30:49.779 Removing: /var/run/dpdk/spdk_pid83274 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85242 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85328 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85418 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85505 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85675 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85765 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85850 00:30:49.779 Removing: /var/run/dpdk/spdk_pid85940 00:30:49.779 Removing: /var/run/dpdk/spdk_pid86288 00:30:49.779 Removing: /var/run/dpdk/spdk_pid86985 00:30:49.779 Removing: /var/run/dpdk/spdk_pid88347 00:30:49.779 Removing: /var/run/dpdk/spdk_pid88552 00:30:49.779 Removing: /var/run/dpdk/spdk_pid88844 00:30:49.780 Removing: /var/run/dpdk/spdk_pid89152 00:30:49.780 Removing: /var/run/dpdk/spdk_pid89709 00:30:49.780 Removing: /var/run/dpdk/spdk_pid89720 00:30:49.780 Removing: /var/run/dpdk/spdk_pid90080 00:30:49.780 Removing: /var/run/dpdk/spdk_pid90248 00:30:50.037 Removing: /var/run/dpdk/spdk_pid90415 00:30:50.037 Removing: 
/var/run/dpdk/spdk_pid90513 00:30:50.037 Removing: /var/run/dpdk/spdk_pid90673 00:30:50.037 Removing: /var/run/dpdk/spdk_pid90792 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91477 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91508 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91543 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91813 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91847 00:30:50.037 Removing: /var/run/dpdk/spdk_pid91878 00:30:50.037 Removing: /var/run/dpdk/spdk_pid92318 00:30:50.037 Removing: /var/run/dpdk/spdk_pid92353 00:30:50.037 Removing: /var/run/dpdk/spdk_pid92821 00:30:50.037 Clean 00:30:50.037 15:18:05 -- common/autotest_common.sh@1437 -- # return 0 00:30:50.037 15:18:05 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:30:50.037 15:18:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:50.037 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:30:50.037 15:18:05 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:30:50.037 15:18:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:50.037 15:18:05 -- common/autotest_common.sh@10 -- # set +x 00:30:50.037 15:18:05 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:50.037 15:18:05 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:50.037 15:18:05 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:50.037 15:18:05 -- spdk/autotest.sh@389 -- # hash lcov 00:30:50.037 15:18:05 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:50.037 15:18:05 -- spdk/autotest.sh@391 -- # hostname 00:30:50.038 15:18:05 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:50.294 geninfo: WARNING: invalid characters removed from testname! 
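The coverage post-processing that follows merges the pre-test baseline capture with this run's capture and then strips out-of-tree sources. A condensed sketch of the equivalent lcov flow, with paths shortened to $OUT for readability; this is only an illustration, and the exact per-run flags (including the --rc branch/function coverage options and the --no-external switch) appear verbatim in the log lines below:
    # sketch; $OUT stands in for /home/vagrant/spdk_repo/spdk/../output
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # drop coverage for sources outside the SPDK tree proper
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done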
00:31:16.841 15:18:30 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:18.216 15:18:33 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:20.749 15:18:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:22.652 15:18:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:25.188 15:18:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:27.093 15:18:42 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:29.007 15:18:44 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:29.267 15:18:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:29.267 15:18:44 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:29.267 15:18:44 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.267 15:18:44 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.267 15:18:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.267 15:18:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.267 15:18:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.267 15:18:44 -- paths/export.sh@5 -- $ export PATH 00:31:29.267 15:18:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.267 15:18:44 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:29.267 15:18:44 -- common/autobuild_common.sh@435 -- $ date +%s 00:31:29.267 15:18:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713453524.XXXXXX 00:31:29.267 15:18:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713453524.eSrJta 00:31:29.267 15:18:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:31:29.267 15:18:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:31:29.267 15:18:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:29.267 15:18:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:29.267 15:18:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:29.267 15:18:44 -- common/autobuild_common.sh@451 -- $ get_config_params 00:31:29.267 15:18:44 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:31:29.267 15:18:44 -- common/autotest_common.sh@10 -- $ set +x 00:31:29.267 15:18:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:31:29.267 15:18:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:31:29.267 15:18:44 -- pm/common@17 -- $ local monitor 00:31:29.267 15:18:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:29.267 15:18:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94482 00:31:29.267 15:18:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:29.267 15:18:44 -- pm/common@21 -- $ date +%s 00:31:29.267 15:18:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94484 00:31:29.267 15:18:44 -- pm/common@26 -- $ sleep 1 00:31:29.267 15:18:44 -- pm/common@21 -- $ date +%s 00:31:29.267 15:18:44 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713453524 00:31:29.267 15:18:44 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713453524 00:31:29.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713453524_collect-vmstat.pm.log 00:31:29.267 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713453524_collect-cpu-load.pm.log 00:31:30.204 15:18:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:31:30.204 15:18:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:30.204 15:18:45 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:30.204 15:18:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:30.204 15:18:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:30.204 15:18:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:30.205 15:18:45 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:30.205 15:18:45 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:30.205 15:18:45 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:30.463 15:18:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:30.463 15:18:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:30.463 15:18:45 -- pm/common@30 -- $ signal_monitor_resources TERM 00:31:30.464 15:18:45 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:31:30.464 15:18:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:30.464 15:18:45 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:30.464 15:18:45 -- pm/common@45 -- $ pid=94489 00:31:30.464 15:18:45 -- pm/common@52 -- $ sudo kill -TERM 94489 00:31:30.464 15:18:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:30.464 15:18:45 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:30.464 15:18:45 -- pm/common@45 -- $ pid=94490 00:31:30.464 15:18:45 -- pm/common@52 -- $ sudo kill -TERM 94490 00:31:30.464 + [[ -n 5104 ]] 00:31:30.464 + sudo kill 5104 00:31:30.473 [Pipeline] } 00:31:30.488 [Pipeline] // timeout 00:31:30.495 [Pipeline] } 00:31:30.512 [Pipeline] // stage 00:31:30.518 [Pipeline] } 00:31:30.531 [Pipeline] // catchError 00:31:30.541 [Pipeline] stage 00:31:30.543 [Pipeline] { (Stop VM) 00:31:30.559 [Pipeline] sh 00:31:30.866 + vagrant halt 00:31:34.157 ==> default: Halting domain... 00:31:40.735 [Pipeline] sh 00:31:41.013 + vagrant destroy -f 00:31:44.336 ==> default: Removing domain... 
00:31:44.354 [Pipeline] sh 00:31:44.636 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:31:44.645 [Pipeline] } 00:31:44.664 [Pipeline] // stage 00:31:44.670 [Pipeline] } 00:31:44.689 [Pipeline] // dir 00:31:44.695 [Pipeline] } 00:31:44.711 [Pipeline] // wrap 00:31:44.717 [Pipeline] } 00:31:44.731 [Pipeline] // catchError 00:31:44.739 [Pipeline] stage 00:31:44.741 [Pipeline] { (Epilogue) 00:31:44.753 [Pipeline] sh 00:31:45.034 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:50.319 [Pipeline] catchError 00:31:50.321 [Pipeline] { 00:31:50.337 [Pipeline] sh 00:31:50.619 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:50.619 Artifacts sizes are good 00:31:50.628 [Pipeline] } 00:31:50.641 [Pipeline] // catchError 00:31:50.651 [Pipeline] archiveArtifacts 00:31:50.656 Archiving artifacts 00:31:50.787 [Pipeline] cleanWs 00:31:50.800 [WS-CLEANUP] Deleting project workspace... 00:31:50.800 [WS-CLEANUP] Deferred wipeout is used... 00:31:50.806 [WS-CLEANUP] done 00:31:50.807 [Pipeline] } 00:31:50.822 [Pipeline] // stage 00:31:50.828 [Pipeline] } 00:31:50.844 [Pipeline] // node 00:31:50.850 [Pipeline] End of Pipeline 00:31:50.891 Finished: SUCCESS
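For reference, the keyring_file test exercised earlier in this run drives SPDK's file-based PSK keyring over the bdevperf RPC socket. A minimal sketch of that RPC sequence, using the same rpc.py commands shown in the log; the /tmp/tmp.* key paths are per-run temporaries and 127.0.0.1:4420 is the local NVMe-oF TCP listener the test sets up, so both would differ outside this job:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    chmod 0600 /tmp/tmp.8ZcYyty6aw   # the keyring rejects group/other-accessible key files (e.g. 0660, as seen above)
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8ZcYyty6aw
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'   # refcount rises while the key is in use
    $rpc -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
    $rpc -s /var/tmp/bperf.sock keyring_file_remove_key key0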